In which year will a majority of AI researchers concur that a superintelligent, fairly general AI has been realized?
Some year after 2040. (34%)
AGI will end humanity before such a consensus will be reached (7%)
2030 (7%)
2029 (7%)
2028 (5%)
2031 (4%)
2025 (4%)
2027 (4%)
2026 (4%)
2024 (3%)
2032 (3%)
2033 (3%)
2034 (2%)
2037 (1.9%)
2035 (1.8%)
2036 (1.8%)
2038 (1.7%)
2040 (1.7%)
2039 (1.7%)
Such a consensus will never be reached because such an AI is not possible to create (1.4%)

Superintelligent AGI: An AI system surpassing the majority of humans in intellect across most or all fields, capable of understanding, learning, and applying knowledge across varied tasks rather than specializing in a single domain.

'Majority of AI researchers' refers to 51% or more of active, globally published AI researchers, as recognized by reputable sources such as established research organizations, academic journals, or comprehensive surveys. The market resolves when such a survey points to a year (even retrospectively) in which most AI researchers agreed that AGI exists. Non-reputable or small-scale surveys will be disregarded.

If the survey indicates AGI was realized before the market's inception, the market resolves as N/A.
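To make the resolution criteria concrete, here is a minimal sketch of the logic in Python. The field names and the inception year are hypothetical illustrations, not part of the market's terms; the actual resolution is a judgment call by the creator about survey quality, not code.

```python
from typing import Optional

# Hypothetical inception year for illustration; the real value is whenever
# this market was created.
MARKET_INCEPTION_YEAR = 2023

def resolve(survey_is_reputable: bool,
            share_agreeing: float,
            year_agi_realized: Optional[int]) -> str:
    """Sketch of the resolution rules described above."""
    # No reputable, large-scale survey yet, or under the 51% threshold:
    # the market keeps waiting.
    if not survey_is_reputable or year_agi_realized is None or share_agreeing < 0.51:
        return "unresolved"
    # The survey says AGI predates the market itself: resolves N/A.
    if year_agi_realized < MARKET_INCEPTION_YEAR:
        return "N/A"
    # Otherwise resolve to the year the survey points to, even retrospectively.
    return str(year_agi_realized)

print(resolve(True, 0.58, 2035))   # -> "2035"
print(resolve(True, 0.58, 2021))   # -> "N/A"
print(resolve(False, 0.90, 2035))  # -> "unresolved"
```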


If most people here are betting on around the end of the decade, that implies superintelligence will likely have been achieved several years before then, given the "moving of the goalposts" that has been going on in this topic. That's right in line with Altman's deadline of solving alignment by 2027.

bought Ṁ500 of 2026 NO

This is a good market except for the last option, which has no financial incentive behind it. Anyone economically rational who believes that superintelligence is likely to wipe out humanity will put their mana into the "after 2040" option instead, since that's where the majority of worlds where we solve alignment lie.

@IsaacKing Feedback taken!
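For readers who do want the math, here is a toy version of the argument above, with invented numbers. The point is that a payout in an extinction world is worth nothing to the bettor, so each option should be valued only by the worlds in which its payout can actually be spent.

```python
# All probabilities below are made up purely for illustration.
p_doom = 0.60                      # credence that AGI ends humanity before any consensus
p_survive = 1 - p_doom             # 0.40
p_after_2040_given_survive = 0.70  # if we survive, consensus most likely comes after 2040

# Unconditional probability of each option resolving YES:
p_doom_option = p_doom                                        # 0.60
p_after_2040_option = p_survive * p_after_2040_given_survive  # 0.28

# Value each payout by whether you are around to spend it:
usable_value_doom = 0.0                        # you never collect in an extinction world
usable_value_after_2040 = p_after_2040_option  # collectable in 28% of worlds

# Despite the doom option being "more likely", its usable value is zero.
print(usable_value_doom, usable_value_after_2040)  # 0.0 0.28
```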

Superintelligence is, by consensus, a subset of AGI, so your question is very confusing. If we had to choose between the two, I think a question about the acknowledgement of AGI would be better, because it is easier to define and also given the denial about GPT-4 being one. Also, the singularity and the potential undesirability of superintelligence make related questions annoying.

@SamuelNIHOUL I don't think superintelligence is necessarily a subset of AGI, or that one has to come before the other. Why do you think so?

@firstuserhere It happens after. I think prefixing 'superintelligence' to 'AGI' is useless, and your definition of 'superintelligent AGI' reflects it: you gave the definition of AGI that is most consensual.

@SamuelNIHOUL I separate "superintelligent" from "fairly general AI". I think a human child is a general agent, though not a very intelligent one if we freeze it in that state.

@SamuelNIHOUL Superintelligence usually refers to AGI that is orders of magnitude smarter than humans. It may thus also show signs or trends of getting out of any control (self-improvement, self-replication, etc.)

@firstuserhere I agree. Most people don't. No one is calling GPT-4 AGI yet, which I think is unfortunate or downright mistaken.

@SamuelNIHOUL I think you should stick to commonly agreed definitions, though, because they make sense. A system that is smarter than any human at anything is still, a priori, similar in intelligence to humans. It's basically a human with many lifetimes of experience, if we look at things like GPT-4.

@SamuelNIHOUL This is why many people still call it dumb, while a good chunk of others realize that, given decent multimodality and training, it is indeed better than anyone at anything.

@SamuelNIHOUL FWIW, I agree with @firstuserhere, as many people want AGI to have natural language abilities, consciousness, the ability to fold laundry, etc. A superintelligence just has to be smart enough to have more control over the world than we do, and thus to beat us in a contest for control. An AI that can master protein folding and DNA manipulation, but can't do language, is smarter and more powerful than us while clearly not an AGI.

@Duncn I would say don't conflate power and intelligence too much, but I see what you mean. By the way, maybe I should highlight how we already have a ton of "superintelligent" narrow AIs that are extremely powerful (and even impactful), such as YouTube's recommendation algorithm or Aladdin, BlackRock's trading AI.

@SamuelNIHOUL Yes; technically a scientific calculator is "superintelligent" in a narrow domain, and in a not-very-AI way.

@Duncn Ah well, there's this phenomenon where powerful new programs are only labelled AI until they are deployed to production and massively used.

@SamuelNIHOUL If you're talking about GPT-4, I use it all the time, and it is 100% not better than me at everything.

I would like to bet that humanity will never create superintelligent AI, because I think it's mispriced at 1%, but I don't want to bet on the idea that it's theoretically impossible and will be recognized as such. I want to include the scenario in which it's possible but we never succeed. Do we agree that "Such a consensus will never be reached because such an AI is not possible to create" includes that scenario as well, or should another market be created?

@GuillaumeTonnel yes it includes that scenario as well

Some year after 2040.

Such a consensus will never be reached because such an AI is not possible to create

Asking for posterity: how will the market differentiate between these two options at resolution if the market doesn't resolve before 2041? I suggest that a proof that AGI is impossible might be the resolution trigger for the no-consensus option, but I am not sure how subjective or provable that might be. "Never" is always a tricky resolution option when the market needs to resolve in finite time.

AGI will end humanity before such a consensus will be reached

Does anyone know what influence adding this last option to the market has on us human traders? (I don't want to do the math just yet.) A potential bequest to the bots that remain after the human traders are gone, especially since any time you buy NO somewhere else you are picking up YES in this last option.

@ShitakiIntaki you don't have to wait until market resolution to profit; you can, say, predict that other people will move the "AGI will end humanity" option to 10%, and profit from that.
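A toy sketch of the mechanics being discussed here: in a multiple-choice market whose option probabilities sum to 1, buying NO on one option mechanically pushes every other option up. The proportional renormalization below is only an illustration, not Manifold's actual CPMM/DPM pricing, and the option names and numbers are made up.

```python
def buy_no(probs: dict[str, float], option: str, new_prob: float) -> dict[str, float]:
    """Push `option` down to `new_prob` and renormalize the rest proportionally."""
    remaining = 1.0 - probs[option]
    scale = (1.0 - new_prob) / remaining
    return {name: new_prob if name == option else p * scale
            for name, p in probs.items()}

probs = {"after 2040": 0.34, "AGI ends humanity": 0.07, "all other years": 0.59}
after = buy_no(probs, "all other years", 0.50)
print(after)
# -> after 2040: ~0.41, AGI ends humanity: ~0.085, all other years: 0.50
# Buying NO elsewhere picked up implied YES on the doom option, as noted above.
```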

bought Ṁ50 of AGI will end humanit... YES

The description, "practically most or every field of interest (possibly measured as economic value added)", to me sounds more like regular old AGI than like superintelligent AI. Not that I'd know of some good, widely accepted definitions; just wanted to point out that the phrasing is uncommon, which might render resolution even harder.

@Primer honestly, I don't see myself ever resolving the market, but sure, if someone has a better AGI definition, I'd be happy to consider it. I tend to go with the OpenAI one since it makes the most practical sense

artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work

@firstuserhere I think that's a reasonable description of AGI.

But when I hear "superintelligent", I imagine: better than top human performance on all tasks (playing Go, raising a child, motivating people, winning a song contest, coming up with philosophical questions, making people feel comfortable, mining Bitcoin, ...). I suggest just dropping the "superintelligent".

@Primer Alright, fair enough. Just to be clear, for this market I'm going with the OpenAI definition because I consider humans to be "generally intelligent agents", and humans are fairly limited in the generality vs. speciality they display (for example, being above average at math, chess, cooking, literature, etc. all at once gets increasingly rare, and an AI that is better than the majority of humans at almost all of those tasks sounds superintelligent to me).

Edit: That's why I clarify what I mean by superintelligence in the description, but thanks for the suggestion; I will edit it to be clearer.

@firstuserhere Just as a data point, when I hear superintelligent, I think something very different from Primer -- something more along the lines of "smarter than humans at most cognitive tasks, very very good at some key tasks, and can afford to be indifferent to anything it can't do" -- e.g., an AI that can successfully manipulate an election and set up a profitable fusion reactor business but can't understand what music is about totes counts as superintelligence. As such, I am very happy with the 'superintelligence' in the title, but wouldn't complain if it were dropped.

@Duncn Nice phrasing with it being able to be "indifferent"; that's better than what I said.

From how I initially understood the question, though, it would resolve to the year a majority of researchers answer "yes" to the question "Do we have AGI right now?". But I'm still not really sure.

Honest question: if the superintelligent AI you described actually existed out there, do you think a majority of AI researchers would then answer yes to "Have we reached AGI?"?

@Primer I think so; I'm fairly certain that as the idea of AI-as-an-agent settles in, we'll get more comfortable with the idea of "Asperger's AI" -- there's no need or want for a neurotypical AI, not on the side of humans and not on the side of AI. A chatbot that can do small talk is sufficient for most customer service, and if we also get therapist and poet AIs, that's great, but probably not required for consensus on AGI. We're already way above GI on math, memory, processing speed, etc. The questions are going to be about things like "creativity" and "empathy"; some will say these are met by "unique outputs" and the "ability to model agents", and others will go a bit spiritual on the issue. I predict that most AI researchers are more on the analytical/practical side than the spiritual/emotive side. But I could be wrong.

@Duncn Thanks for sharing your thoughts. Let's hope you're right! I find it very hard to predict when and how fast the Overton window shifts. It might also depend a lot on how global the survey would be.
