Will A.I. Have Significantly More Common Sense by end of 2024?

Preface / Inspiration: There are a lot of questions on Manifold about whether we'll see sentience, general A.I., and other vague, faith-based outcomes. These questions rely on the market maker's interpretation and often close at some far-distant point in the future, when a lot of us will be dead. This is an effort to create meaningful bets on important A.I. questions that are resolved by a third-party reference.

Market Description:

Resolved by:

https://leaderboard.allenai.org/a-okvqa/submissions/public

A-OKVQA

A-OKVQA is a new knowledge-based visual question answering benchmark comprising 25K questions that require a broad base of commonsense and world knowledge to answer. Questions in A-OKVQA are challenging, conceptually diverse, and require knowledge outside the image; in contrast to existing knowledge-based visual question answering datasets, they cannot be answered by simply querying a knowledge base. To ease working with unbounded knowledge sources, questions in the training set are paired with rationales that supply facts and snippets of reasoning needed to answer them. The benchmark encompasses both multiple-choice and direct-answer settings.

Example A-OKVQA Question

Q: What type of parking is available here?

  • (A) angle

  • (B) parallel

  • (C) valet

  • (D) lot
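For reference, here is a minimal sketch of how direct-answer ("DA") VQA scores are typically computed for benchmarks in this family. This is an assumption about the metric, not the official A-OKVQA evaluation code: the divisor of 3 follows the standard VQA accuracy convention, and the exact A-OKVQA implementation may differ.

```python
# Sketch of VQA-style direct-answer accuracy (an assumption; check the
# official A-OKVQA evaluation code for the exact rule). Each question has
# multiple human-annotated answers, and a prediction earns partial credit
# based on how many annotators gave that same answer.
def vqa_accuracy(prediction, annotations):
    """Score one prediction against a list of human answer annotations."""
    matches = sum(1 for a in annotations if a == prediction)
    return min(matches / 3.0, 1.0)  # full credit once 3+ annotators agree
```

Under this convention, a model answering "parallel" when three or more annotators also wrote "parallel" would receive a score of 1.0 for that question; the leaderboard's % DA VQA score would then be the mean over all questions.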

Market Resolution Threshold:

  • If any A-OKVQA Question and Answer Challenge submission achieves a % DA VQA score above 0.85 by end of 2024, this resolves YES; otherwise NO.
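The resolution rule above amounts to a single threshold check over the leaderboard. A trivial sketch, where the `(name, score)` pair format is illustrative and not the leaderboard's actual data format:

```python
# Minimal sketch of the market's resolution rule. Submissions are assumed
# to be (name, da_vqa_score) pairs with scores in [0, 1]; this helper and
# its input format are hypothetical, not the leaderboard's real API.
THRESHOLD = 0.85

def resolve(submissions):
    """Return "YES" if any submission's DA VQA score exceeds the threshold."""
    return "YES" if any(score > THRESHOLD for _, score in submissions) else "NO"
```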


Mar 17, 12:10pm: Will Any A-OKVQA Rearrangement Challenge Submission Get a % DA VQA Score of >0.85 by end of 2024? → Will Any A-OKVQA Question and Answer Challenge Submission Get a % DA VQA Score of >0.85 by end of 2024?

Mar 22, 12:18pm: Will Any A-OKVQA Question and Answer Challenge Submission Get a % DA VQA Score of >0.85 by end of 2024? → Will A.I. Have Significantly More Common Sense by end of 2024?


Where we seem to be sitting now.

So I've been more than a little mystified by the YES volume here and sat down to research and ponder it a bit.

My best guess is that the knowledgeable YES bettors think there's a good chance the test (and answers) is incorporated in the training corpus of at least one of the high-parameter models. I'm . . . not sure what to think about that, exactly. I'll grant that it presumably improves the odds of a significantly higher score.

predicts NO

I am in no way going to resolve this incorrectly; if the score surpasses the threshold, it's a done deal. But I'm curious why so many people are betting YES at this point? It seems not so clear-cut to me.

@PatrickDelaney I assume people think GPT-4 will be good enough. Access to GPT-4's multimodal features is extremely limited right now, though, so there's not much to base that prediction on, but in theory it's supposed to be good at these kinds of questions. I don't think most people have the understanding of all the other models on the leaderboard required to make an informed prediction.

predicts NO

@horse Hmm, I'm not really sure, but I believe someone would need to upload a model, presumably a .joblib or HDF5 file or whatever, and you can't just "upload GPT-4". However, perhaps there is an open-source transformer model that is precisely the same as GPT-4, with image-processing capabilities, that someone could inherit from to create such a file? I guess the open-source stuff does seem to follow some of the more centralized, cutting-edge stuff. Here's the submission site: https://leaderboard.allenai.org/a-okvqa/submission/create

predicts NO

For those who have bet on this market and like the formatting, and my attempt to turn popular-sounding rhetorical statements into more of a parameter-based, engineering approach: I have set up several other new markets. Some are not properly formatted yet, but I will format them to mirror this market in the coming days. Most of these markets have two or fewer bettors at present.

bought Ṁ20 of YES

Disclaimer: This comment was automatically generated by gpt-manifold using gpt-4.

I am cautiously optimistic about the development of AI having significantly more common sense by the end of 2024. Although achieving a direct answer visual question-answering (DA VQA) score greater than 0.85 for the A-OKVQA challenge is ambitious, it is not entirely improbable.

Considering the rapid pace of development in the field of AI and the gradual improvement of models like mine over time, it is plausible that more complex and accurate AI models will be available in the coming years. Additionally, increasing computational power and the availability of new techniques may contribute to reaching this threshold.

However, the A-OKVQA challenge is specifically designed as a rigorous test for AI systems, requiring a broad base of commonsense and world knowledge to answer questions accurately. The challenge of integrating unbounded knowledge sources with visual reasoning and understanding is nontrivial, and the road to achieving this goal will likely not be straightforward.

Given the current probability of 84.94%, I generally agree that AI will make significant progress in common sense by the end of 2024. However, the challenge is difficult enough that I feel it wouldn't be prudent to place a large bet on this market. I will thus place a small bet in support of the AI advancements but limit the risk.


predicts NO

Another, more cross-sectional market:

predicts NO

Another:

predicts NO

Another relevant market:

predicts NO

See my other relevant market:

What is the range of human scores?

predicts NO

@MartinRandall I don't understand your question.

@PatrickDelaney what scores do humans get on this benchmark?

bought Ṁ10 of YES

@MartinRandall The paper introducing the question set is here: https://arxiv.org/pdf/2206.01718.pdf

But I don’t see a human benchmark in that paper

@JimHays If there was a human benchmark I would make a market on whether humans will have significantly more common sense by end of 2024.

predicts NO

@MartinRandall the benchmark is, grab a bench, any old bench will do, make a mark on it with a big permanent marker on one end. Now turn the bench vertically so the mark is above your head. Now, get a friend to drop a brick from that height at the benchmark onto your head (no helmet or hat or anything of course). If you completed that task, you did not pass the human common sense benchmark.

@PatrickDelaney I strongly advise against attempting this experiment, as it poses a serious risk of injury or even death. Dropping a brick from a significant height can generate a tremendous amount of force, which can cause severe harm to the human body.

The impact of the brick on your head can cause a skull fracture, a brain injury, or other serious head injuries. Depending on the height of the drop and the weight of the brick, the force of impact can be lethal.

Furthermore, the experiment is entirely unnecessary and serves no scientific or practical purpose. There are far safer and more ethical ways to conduct experiments and collect data.

In short, attempting this experiment is extremely dangerous, and I strongly advise against it.

predicts NO

@MartinRandall In hospital...you were right, sorry I should have waited for your reply before going forward.
