Will we get AGI before 2026?
5% chance (closes 2026)

Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply its intelligence to a wide variety of problems, much like a human being. Unlike narrow or weak AI, which is designed and trained for specific tasks (like language translation, playing a game, or image recognition), AGI can theoretically perform any intellectual task that a human being can. It involves the capability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.

Resolves YES if such a system is created and publicly announced before January 1st, 2026.

Here are markets with the same criteria:

/RemNiFHfMN/did-agi-emerge-in-2023

/RemNiFHfMN/will-we-get-agi-before-2025

/RemNiFHfMN/will-we-get-agi-before-2026-3d9bfaa96a61 (this question)

/RemNiFHfMN/will-we-get-agi-before-2027-d7b5f2b00ace

/RemNiFHfMN/will-we-get-agi-before-2028-ff560f9e9346

/RemNiFHfMN/will-we-get-agi-before-2029-ef1c187271ed

/RemNiFHfMN/will-we-get-agi-before-2030

/RemNiFHfMN/will-we-get-agi-before-2031

/RemNiFHfMN/will-we-get-agi-before-2032

/RemNiFHfMN/will-we-get-agi-before-2033

/RemNiFHfMN/will-we-get-agi-before-2034

/RemNiFHfMN/will-we-get-agi-before-2033-34ec8e1d00fd

/RemNiFHfMN/will-we-get-agi-before-2036

/RemNiFHfMN/will-we-get-agi-before-2037

/RemNiFHfMN/will-we-get-agi-before-2038

/RemNiFHfMN/will-we-get-agi-before-2039

/RemNiFHfMN/will-we-get-agi-before-2040

/RemNiFHfMN/will-we-get-agi-before-2041

/RemNiFHfMN/will-we-get-agi-before-2042

/RemNiFHfMN/will-we-get-agi-before-2043

/RemNiFHfMN/will-we-get-agi-before-2044

/RemNi/will-we-get-agi-before-2045

/RemNi/will-we-get-agi-before-2046

/RemNi/will-we-get-agi-before-2047

/RemNi/will-we-get-agi-before-2048

Related markets:

/RemNi/will-we-get-asi-before-2027

/RemNi/will-we-get-asi-before-2028

/RemNiFHfMN/will-we-get-asi-before-2029

/RemNiFHfMN/will-we-get-asi-before-2030

/RemNiFHfMN/will-we-get-asi-before-2031

/RemNiFHfMN/will-we-get-asi-before-2032

/RemNiFHfMN/will-we-get-asi-before-2033

/RemNi/will-we-get-asi-before-2034

/RemNi/will-we-get-asi-before-2035

Other questions for 2026:

/RemNi/will-there-be-a-crewed-mission-to-l-0e0a12a57167

/RemNi/will-we-get-room-temperature-superc-ebfceb8eefc5

/RemNi/will-we-discover-alien-life-before-cbfe304a2ed7

/RemNi/will-a-significant-ai-generated-mem-1760ddcaf500

/RemNi/will-we-get-fusion-reactors-before-a380452919f1

/RemNi/will-we-get-a-cure-for-cancer-befor-e2cd2abbbed6

Other reference points for AGI:

/RemNi/will-we-get-agi-before-vladimir-put

/RemNi/will-we-get-agi-before-xi-jinping-s

/RemNi/will-we-get-agi-before-a-human-vent

/RemNi/will-we-get-agi-before-a-human-vent-549ed4a31a05

/RemNi/will-we-get-agi-before-we-get-room

/RemNi/will-we-get-agi-before-we-discover

/RemNi/will-we-get-agi-before-we-get-fusio

/RemNi/will-we-get-agi-before-1m-humanoid


The creator of this market (and sister markets) has deleted their account. (Thanks to @Primer for the nudge to clarify what will happen with these markets.) What do @traders think of making these markets mirror https://manifold.markets/ManifoldAI/agi-when-resolves-to-the-year-in-wh-d5c5ad8e4708 ?

AGI When? [High Quality Turing Test]
This market resolves to the year in which an AI system exists which is capable of passing a high quality, adversarial Turing test. It is used for the Big Clock on the manifold.markets/ai page. The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. For proposed testing criteria, refer to this Metaculus Question by Matthew Barnett, or the Longbets wager between Ray Kurzweil and Mitch Kapor. As of market creation, Metaculus predicts there is an ~88% chance that an AI will pass the Longbets Turing test before 2030, with a median community prediction of July 2028. Manifold's current prediction of the specific Longbets Turing test can be found here: @/dreev/will-ai-pass-the-turing-test-by-202 This question is intended to determine the Manifold community's median prediction, not just of the Longbets wager specifically but of any similiarly high-quality test. Additional Context From Longbets: One or more human judges interview computers and human foils using terminals (so that the judges won't be prejudiced against the computers for lacking a human appearance). The nature of the dialogue between the human judges and the candidates (i.e., the computers and the human foils) is similar to an online chat using instant messaging. The computers as well as the human foils try to convince the human judges of their humanness. If the human judges are unable to reliably unmask the computers (as imposter humans) then the computer is considered to have demonstrated human-level intelligence. 
Additional Context From Metaculus: This question refers to a high quality subset of possible Turing tests that will, in theory, be extremely difficult for any AI to pass if the AI does not possess extensive knowledge of the world, mastery of natural language, common sense, a high level of skill at deception, and the ability to reason at least as well as humans do. A Turing test is said to be "adversarial" if the human judges make a good-faith attempt, in the best of their abilities, to successfully unmask the AI as an impostor among the participants, and the human confederates make a good-faith attempt, in the best of their abilities, to demonstrate that they are humans. In other words, all of the human participants should be trying to ensure that the AI does not pass the test. Note: These criteria are still in draft form, and may be updated to better match the spirit of the question. Your feedback is welcome in the comments.

@dreev This would work for the AGI ones but not for the ASI ones

@dreev Personally I'm dumping money here precisely because I don't think the Turing test is a good criterion. So I'd rather not. I'm fine with the drop-in remote worker definition. Because there are no clear criteria here, I assume this will resolve YES only when it's incontrovertibly clear to the non-rat-adjacent man in the street that there is AGI, which I'm also fine with.

@dreev the metaculus and longbets versions are useless

@dreev making it mirror that other market is a bad mistake, because the criteria there are irrelevant to AGI

@CamillePerrin It sounds like you might like my new AGI market that's aiming to avoid being triggered by any technicalities. (Note my huge bias though -- I'm betting heavily on NO at the probabilities most here seem to think reasonable.)

I don't actually think the Metaculus and Longbets versions are useless. I think it's pretty unlikely that something less than a true AGI will manage to pass a long, informed, adversarial Turing test. That's why I've been betting NO in these markets even while proposing that these markets mirror those Turing-test-based ones.

It seems the one thing everyone agrees on is that passing a long, informed, adversarial Turing test is a necessary if not sufficient condition for these markets to resolve YES.

@dreev I don't agree. I think it's a pretty useless test. It's likely that a large number of humans would fail to pass it if the examiners genuinely thought they might be machines.

@dreev Not certain whether or not that would be a good source of resolution, but I strongly believe that N/A should not be used in long-term markets when avoidable. The prospect of existing profits getting randomly clawed back years later makes the site worse.

@dreev a (rigorous) Turing test around the advent of AGI is virtually guaranteed to have a very high false positive rate and a very high false negative rate, which makes this test useless. This wasn't obvious to people 50 years ago, but it's obvious now

@dreev one key thing that's changed is that the examiners have in depth knowledge about the shortcomings of a multitude of near-AGI systems. The more knowledge they have in this vein, the higher the false negative rate

The closer you get to AGI, the closer the apparent value of the Turing test (no matter how "adversarial") converges to zero.

If you were to design a high quality adversarial Turing test today, it would be full of gotchas the examiners think might trip up LLMs, all of which will be useless two years from now, and many of which would already filter out a large part of the human control group.

@UnspecifiedPerson Agreed on avoiding N/A. Hopefully we can settle on fair and reasonably objective resolution criteria here. Clearly some people want something stricter than even the strictest Turing test. (Again, note my own bias for having stricter criteria.) Aschenbrenner's drop-in remote workers may be the answer. Please do chime in in my other market about this, whether or not it has any bearing on this one.

PS: @MalachiteEagle can you edit your points into a single response? This isn't Discord!


@dreev The drop-in remote worker is a better criterion than the Turing test, but it's still weaker than this question's criteria, because it's possible to have a drop-in remote worker that can get work done without "learning quickly, and learning from experience". A drop-in remote worker can get work done without being AGI.

@dreev The first AGIs will come from big labs which are specifically trying to prevent their models from being used to deceive people; a test based on their ability to deceive is a bad idea, as it will resolve a year or two late.

@dreev This market has pretty clear criteria. Why change them just because the creator has left?

@VitorBosshard Oh, no one wants to change anything; sorry to give that impression. We want to pin down what things like "learns quickly" mean.

The market description as the creator left it is not clear, unless you mean that it's clear we're not there yet. I agree with that.

Also since the title just says "AGI" like many other markets do, I think we should err in the direction of what people mean by "AGI" based on other markets. Especially prominent ones like the one that the big countdown timer at manifold.markets/ai is based on. But also I agree that the creator's intent in this market was to be a bit stricter in the definition of what counts as AGI. So, again, we're aiming to pin that down better.

@dreev did someone contact the original creator?

@MalachiteEagle I actually just presumed they're unreachable since their account is deleted. But if anyone can lure them back to weigh in, that'd be awesome.

Arbitrage opportunity: https://manifold.markets/dreev/in-what-year-will-we-have-agi

(I think that market is a bit more likely to resolve NO than this one, as it's been defined so far.)
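For readers unfamiliar with how such cross-market arbitrage works, here is a rough sketch of the per-share arithmetic (my own illustration, not from either linked market). It assumes both binary markets resolve identically and ignores fees, loans, and Manifold's CPMM price impact:

```python
# Hedged sketch: arbitrage between two binary markets assumed to resolve
# the same way. Example prices are illustrative, not live market data.

def arbitrage_profit_per_share(p_a: float, p_b: float) -> float:
    """Profit per paired share from buying 1 YES share in market A at price
    p_a and 1 NO share in market B at price (1 - p_b). If both markets
    resolve the same way, exactly one of the two shares pays out 1 mana,
    so the guaranteed profit is 1 minus the total cost."""
    cost = p_a + (1.0 - p_b)
    return 1.0 - cost

# If this market sits at 5% and the other at 8%, a YES here plus a NO there
# locks in about 0.03 mana per share pair.
profit = arbitrage_profit_per_share(0.05, 0.08)
```

The catch, as the comment above notes, is that the two markets may not actually share resolution criteria, so the "arbitrage" is only risk-free to the extent they resolve identically.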

Would it be fair to treat these AGI-when markets as mirroring the following question on Metaculus?

https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/

Or maybe this one:

https://www.metaculus.com/questions/11861/when-will-ai-pass-a-difficult-turing-test/

My current thinking is that passing a long, informed, adversarial Turing test (human foils are PhD-level experts in at least one STEM field and judges are PhD-level experts in AI) is a necessary but perhaps not sufficient condition for AGI. What we really mean by AGI is along the lines of Aschenbrenner's drop-in remote workers. Here's how I put it elsewhere:

Set physical capabilities aside and imagine hiring a remote worker — someone who can participate in Zoom calls, send emails, write code, and do anything else on any website or web app. If an AI can do all those things at least as well as humans, it counts as AGI.

I think that comports best with the market description here?

@dreev drop-in remote workers sound like AGI to me, I support this definition.

@dreev Neither of the Metaculus questions properly addresses (long-term) autonomy, which I think is generally considered part of AGI. Drop-in remote workers come very close but might need to be specified further, for example that it means the ability to do a job (including planning, prioritizing, etc.), and not just a set of individual tasks.

I think excluding physical tasks is the right call

@dreev Neither of those two questions covers "learn quickly, and learn from experience."

@MalachiteEagle The "learn quickly, and learn from experience" part is quite tricky.

One could argue it would at least have to finetune itself constantly, and maybe even retrain itself to check that box.

One could also argue a bit of rather extensive notetaking like we're currently seeing with ClaudePlaysPokemon might be enough.

That's a wide range of interpretations.

@Primer Which is why I continue to think we should N/A all these and create new ones based on Metaculus, and maybe add some based on those but with added operationalization of the ability to change weights once deployed.

@Primer I disagree. I think there is a key step between now and AGI which specifically tackles "learn quickly, and learn from experience". There are no deployed models today that can do those things in a way that remotely resembles human memory/skill acquisition. This is why N/A'ing these questions is wrong.

@MalachiteEagle I've got no skin in the game here. But if I were invested in any of those markets, I'd push for a clarification. I wouldn't be surprised if a mod resolved these YES based on a combination of "Metaculus" and "reasonable expectations". I'd also think about creating new markets, as some will argue a bit of note-taking qualifies (or Manifold may decide it doesn't have the manpower to resolve those tricky cases) and all these markets might end up N/A'd.

@Primer Resolving one of these markets a year isn't exactly going to trigger a manpower shortage. The good thing about Manifold is that it's a marketplace of ideas, and the best market wins. Creating new questions is great, but this question already does more in 3 sentences than any of those Metaculus questions manage in 3 pages.

I don't think these questions will end up N/A. Furthermore, I've seen these ones getting linked a fair bit on Twitter and elsewhere, which brings more interest to the platform.

@MalachiteEagle

resolving one of these markets a year isn't exactly going to trigger a manpower shortage

There may well be hundreds of those. I doubt (might of course be wrong) the current procedure is sustainable with a growing userbase.

@MalachiteEagle By the way, I absolutely agree with this part:

I think there is a key step between now and AGI which specifically tackles "learn quickly, and learn from experience". There are no deployed models today that can do those things in a way that remotely resembles human memory/skill acquisition.

Any objections to making this market mirror https://manifold.markets/ManifoldAI/agi-when-resolves-to-the-year-in-wh-d5c5ad8e4708 ?


@dreev that seems wise!

@MalachiteEagle See Daniel's proposal above. The "learn quickly, and learn from experience" part would be gone, and we'd additionally get other problems, like when an AI is clearly smart enough but nobody performs the required tests, so we'd be trading on whether and when the respective tests happen.

@dreev I appreciate mod involvement; it's important to clear this up well before those markets close. But any proposals need to ping @traders, and not only here but in all the related markets, because traders in the 2031 version should have the same rights. The same holds for all markets which condition on these ones.

Maybe there should be @mods involved who don't hold positions here or in related markets, as traders are already arbitraging with potential substitute resolution criteria. And maybe N/A would thus be smarter. Again, this refers to all of the sister markets as well.

@dreev making it mirror the other market is a mistake, because those other criteria are bad. "Learn quickly and learn from experience" is key to this question. It is why this question is better than those other markets
