In which year will a majority of AI researchers concur that a superintelligent, fairly general AI has been realized?
2023: 0.7%
2024: 2%
2025: 3%
2026: 3%
2027: 3%
2028: 4%
2029: 6%
2030: 6%
2031: 4%
2032: 3%
2033: 3%
2034: 2%
2035: 1.5%
2036: 1.5%
2037: 1.6%
2038: 1.5%
2039: 1.4%
2040: 1.4%
Some year after 2040: 37%
Such a consensus will never be reached because such an AI is not possible to create: 4%
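
For readers who want the implied timeline rather than per-year numbers, the listed percentages can be folded into a cumulative distribution. This is a minimal sketch under the assumption that the figures above are a snapshot of the option prices; answers not listed here (such as the "AGI will end humanity" option discussed in the comments) hold the remaining probability mass.

```python
# Cumulative "consensus reached by year X" implied by the option prices above.
# Assumption: the percentages are treated as a plain probability distribution;
# the mass not shown here sits in other answers.
probs = {
    2023: 0.7, 2024: 2.0, 2025: 3.0, 2026: 3.0, 2027: 3.0, 2028: 4.0,
    2029: 6.0, 2030: 6.0, 2031: 4.0, 2032: 3.0, 2033: 3.0, 2034: 2.0,
    2035: 1.5, 2036: 1.5, 2037: 1.6, 2038: 1.5, 2039: 1.4, 2040: 1.4,
}

cumulative = 0.0
for year in sorted(probs):
    cumulative += probs[year]
    print(f"consensus by {year}: {cumulative:.1f}%")

# By 2030 the cumulative probability is roughly 28%, with a further 37% on
# "some year after 2040" and 4% on "never".
```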

Superintelligent AGI: an AI system that surpasses most or all humans in most or all intellectual fields, capable of understanding, learning, and applying knowledge across a wide range of tasks rather than specializing in a single domain.

'Majority of AI researchers' refers to 51% or more of active, globally published AI researchers, as recognized by reputable sources such as established research organizations, academic journals, or comprehensive surveys. The market resolves when such a survey points to a year (even retrospectively) in which most AI researchers agreed that AGI exists. Non-reputable or small-scale surveys will be disregarded.

If the survey indicates AGI was realized before the market's inception, the market resolves as N/A.
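
To make the resolution rule concrete, here is a minimal sketch in Python. The survey figures, the function name, and the inception-year constant are hypothetical illustrations, not part of the market's terms.

```python
# Hypothetical per-year survey results: the fraction of surveyed AI
# researchers who agree that superintelligent AGI exists. All numbers
# below are invented for illustration.
survey_agreement = {2035: 0.32, 2036: 0.48, 2037: 0.55, 2038: 0.61}

MARKET_INCEPTION_YEAR = 2023  # resolves N/A if consensus predates the market

def resolution_year(agreement_by_year, threshold=0.51):
    """Return the earliest surveyed year meeting the 51% consensus bar."""
    for year in sorted(agreement_by_year):
        if agreement_by_year[year] >= threshold:
            return year
    return None  # no qualifying year yet

year = resolution_year(survey_agreement)
if year is None:
    print("unresolved")
elif year < MARKET_INCEPTION_YEAR:
    print("resolves N/A")  # consensus predates the market's inception
else:
    print(f"resolves to {year}")  # with the numbers above: 2037
```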


I'd recommend changing the default sort to Old or something, so the years are ordered

bought Ṁ5 AGI will end humanit... NO

This market really shows that there aren't many people who predict AGI before 2030, or anything even remotely close to it.

@Seeker I think it also shows that we are really good at moving the goalposts

If most people here are betting around the end of the decade, that implies that superintelligence will likely have been achieved several years before that, given the "moving of the goalposts" that's been going on in this topic. That's right in line with Altman's deadline of solving alignment by 2027.

bought Ṁ500 of 2026 NO

This is a good market, except for the last option, which has no financial incentive behind it. Anyone economically rational who believes that superintelligence is likely to wipe out humanity will put their mana into the "after 2040" option instead, since that is where most of the worlds in which we solve alignment lie.

@IsaacKing Feedback taken!

Superintelligence is, by general consensus, a subset of AGI, so your question is very confusing. If we had to choose between the two, I think a question about the acknowledgement of AGI would be better, because it is easier to define, and also given the denial about GPT-4 being one. Also, the singularity and the potential undesirability of superintelligence make related questions annoying.

@SamuelNIHOUL I don't think superintelligence is necessarily a subset of AGI, or that AGI needs to happen first. Why do you think so?

@firstuserhere It happens after. I think prefixing 'superintelligent' to 'AGI' is useless, and your definition of 'superintelligent AGI' reflects that: you gave the most consensual definition of plain AGI.

@SamuelNIHOUL I separate 'superintelligent' from 'fairly general AI'. I think a human child is a general agent, though not a very intelligent one if we freeze it in that state.

@SamuelNIHOUL Superintelligence usually refers to AGI that is orders of magnitude smarter than humans. It may thus also show signs or trends of escaping any control (self-improvement, self-replication, etc.).

@firstuserhere I agree. Most people don't. No-one is calling GPT-4 AGI yet. Which I think is unfortunate or downright mistaken.

@SamuelNIHOUL I think you should stick to commonly agreed definitions, though, because they make sense. A system that is smarter than any human at anything is still, a priori, similar in intelligence to humans. It's basically a human with many lifetimes of experience, if we look at things like GPT-4.

@SamuelNIHOUL This is why many people still call it dumb, while a good chunk of others do realize that, given decent multimodality and training, it is indeed better at anything than anyone.

@SamuelNIHOUL FWIW, I agree with @firstuserhere, as many people want AGI to have natural language abilities, consciousness, the ability to fold laundry, etc. A superintelligence just has to be smart enough to have more control over the world than we do, and thus beat us in a contest for control. An AI that can master protein folding and DNA manipulation but can't do language would be smarter and more powerful than us while clearly not being an AGI.

@Duncn I would say don't conflate power and intelligence too much, but I see what you mean. By the way, maybe I should highlight that we already have a ton of 'superintelligent' narrow AIs that are extremely powerful (and even impactful), such as YouTube's recommendation algorithm or Aladdin, BlackRock's trading AI.

@SamuelNIHOUL Yes; technically a scientific calculator is 'superintelligent' in a narrow domain, and in a not-very-AI way.

@Duncn Ah well, there's this phenomenon whereby powerful new programs are only labelled AI until they are deployed to production and massively used.

@SamuelNIHOUL If you're talking about GPT-4, I use it all the time, and it is 100% not better than me at everything.

I would like to bet on humanity never creating superintelligent AI, because I think it's mispriced at 1%, but I don't want to bet on the idea that it's theoretically impossible and will be recognized as such. I want to include the scenario in which it's possible but we never succeed. Do we agree that "Such a consensus will never be reached because such an AI is not possible to create" includes that scenario as well, or should another market be created?

@GuillaumeTonnel Yes, it includes that scenario as well.

Some year after 2040.

Such a consensus will never be reached because such an AI is not possible to create

Asking for posterity: how will the market differentiate between these two options at resolution if the market doesn't resolve before 2041? I suggest that a proof that AGI is impossible might be the resolution trigger for the no-consensus option, but I am not sure how subjective or provable that would be. "Never" is always a tricky resolution option when the market needs to resolve in finite time.

AGI will end humanity before such a consensus will be reached

Does anyone know what effect adding this last option to the market has on us human traders? (I don't want to do the math just yet.) A potential bequest to the bots that remain after the human traders are gone, especially since any time you buy NO somewhere else you are picking up YES on this last option.

@ShitakiIntaki You don't have to wait until market resolution to profit; you can, say, predict that other people will move "AGI will end humanity" to 10%, and profit from that.
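
The mechanics discussed in this thread can be sketched in a few lines. Manifold's actual multi-answer AMM is more involved, but in any market whose option probabilities sum to one, buying NO on one option necessarily pushes probability mass into all the others, including the "AGI will end humanity" answer. Below is a toy renormalization model with hypothetical prices, not Manifold's real pricing function.

```python
# Toy sum-to-one market: buying NO on one option lowers its probability and
# spreads the freed mass across the remaining options in proportion to their
# current prices. This is a simplification, not Manifold's actual AMM math.
def buy_no(probs: dict[str, float], option: str, new_prob: float) -> dict[str, float]:
    """Lower `option` to `new_prob` and renormalize every other option."""
    freed = probs[option] - new_prob
    others_total = 1.0 - probs[option]
    return {
        name: new_prob if name == option else p + freed * p / others_total
        for name, p in probs.items()
    }

# Hypothetical snapshot of four option prices (they sum to 1.0).
market = {"2026": 0.03, "after 2040": 0.37,
          "AGI ends humanity": 0.06, "everything else": 0.54}
updated = buy_no(market, "2026", 0.02)
print(updated["AGI ends humanity"])  # rises: NO elsewhere is YES here
```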

