Will we get AGI before 2025?
Standard · 130 · Ṁ150k · 2025 · 3% chance

Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply its intelligence to a wide variety of problems, much like a human being. Unlike narrow or weak AI, which is designed and trained for specific tasks (like language translation, playing a game, or image recognition), AGI can theoretically perform any intellectual task that a human being can. It involves the capability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.

Resolves YES if such a system is created and publicly announced before January 1st, 2025.

Here are markets with the same criteria:

/RemNiFHfMN/did-agi-emerge-in-2023

/RemNiFHfMN/will-we-get-agi-before-2025 (this question)

/RemNiFHfMN/will-we-get-agi-before-2026-3d9bfaa96a61

/RemNiFHfMN/will-we-get-agi-before-2027-d7b5f2b00ace

/RemNiFHfMN/will-we-get-agi-before-2028-ff560f9e9346

/RemNiFHfMN/will-we-get-agi-before-2029-ef1c187271ed

/RemNiFHfMN/will-we-get-agi-before-2030

/RemNiFHfMN/will-we-get-agi-before-2031

/RemNiFHfMN/will-we-get-agi-before-2032

/RemNiFHfMN/will-we-get-agi-before-2033

/RemNiFHfMN/will-we-get-agi-before-2034

/RemNiFHfMN/will-we-get-agi-before-2033-34ec8e1d00fd

/RemNiFHfMN/will-we-get-agi-before-2036

/RemNiFHfMN/will-we-get-agi-before-2037

/RemNiFHfMN/will-we-get-agi-before-2038

/RemNiFHfMN/will-we-get-agi-before-2039

/RemNiFHfMN/will-we-get-agi-before-2040

/RemNiFHfMN/will-we-get-agi-before-2041

/RemNiFHfMN/will-we-get-agi-before-2042

/RemNiFHfMN/will-we-get-agi-before-2043

/RemNiFHfMN/will-we-get-agi-before-2044

/RemNi/will-we-get-agi-before-2045

/RemNi/will-we-get-agi-before-2046

/RemNi/will-we-get-agi-before-2047

/RemNi/will-we-get-agi-before-2048

Related markets:

/RemNi/will-we-get-asi-before-2027

/RemNi/will-we-get-asi-before-2028

/RemNiFHfMN/will-we-get-asi-before-2029

/RemNiFHfMN/will-we-get-asi-before-2030

/RemNiFHfMN/will-we-get-asi-before-2031

/RemNiFHfMN/will-we-get-asi-before-2032

/RemNiFHfMN/will-we-get-asi-before-2033

/RemNi/will-we-get-asi-before-2034

/RemNi/will-we-get-asi-before-2035

Other questions for 2025:

/RemNi/will-earth-have-a-space-elevator-be-3192414ff7cb

/RemNi/will-we-get-room-temperature-superc-e940f30870be

/RemNi/will-we-discover-alien-life-before-031ec0858fcc

/RemNi/will-we-get-fusion-reactors-before-d18e9fd38cd1

/RemNi/will-we-get-a-cure-for-cancer-befor-bf2acb801224

/RemNiFHfMN/will-there-be-a-crewed-mission-to-v-91a92e57402f

/RemNi/will-there-be-a-crewed-mission-to-l-5be75802cd57

/RemNiFHfMN/will-there-be-a-crewed-mission-to-m-3a9ca9fc5ea2

/RemNiFHfMN/will-there-be-a-crewed-mission-to-j-108243356386

/RemNiFHfMN/will-there-be-a-crewed-mission-to-s-5027258fe404

/RemNi/will-there-be-a-crewed-mission-to-u-cf692ec79d61

/RemNi/will-there-be-a-crewed-mission-to-n-f447d8800dd3

/RemNi/will-vladimir-putin-be-president-of-c5fc19dfa944

/RemNi/will-xi-jinping-be-the-leader-of-ch-f4bb79318ae8

/RemNi/will-kim-jong-un-be-the-leader-of-n-2c7e5cf84f34

/RemNi/will-an-ai-generated-video-reach-1b

Other reference points for AGI:

/RemNi/will-we-get-agi-before-vladimir-put

/RemNi/will-we-get-agi-before-xi-jinping-s

/RemNi/will-we-get-agi-before-a-human-vent

/RemNi/will-we-get-agi-before-a-human-vent-549ed4a31a05

/RemNi/will-we-get-agi-before-we-get-room

/RemNi/will-we-get-agi-before-we-discover

/RemNi/will-we-get-agi-before-we-get-fusio

/RemNi/will-we-get-agi-before-1m-humanoid

predicts YES

@esusatyo "this page doesn't exist" :(

neon sold Ṁ2,765 NO

@neonef735 lots of suspicious trading activity going on

predicts NO

Sanity check: there's no way for https://manifold.markets/dreev/will-an-llm-be-able-to-solve-confus to resolve NO and this market to resolve YES, right?

predicts NO

@dreev AGI does not necessarily have to be an LLM. It's possible that at some future point a different class of model could solve those problems while LLMs remain unable to do so.

predicts NO

@RemNi an example, which seems strange today but could possibly occur, would be a model that is trained to do inpainting on images, and is never given an explicit text input. It's possible to create an image containing the geometric problem with the text simply printed in the image. An "AGI-level" inpainting model could inpaint part of the image with the correct solution, again printed as text in the image. The prompt in this case would be the image containing the problem description and the image mask indicating where the solution is supposed to go.
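The wrapper described in this comment could be sketched roughly as below. Everything here is hypothetical: the "AGI-level" inpainting model is replaced by a stub that just writes a canned answer into the masked region, and the text-as-image rendering is a toy character grid rather than real pixels. The sketch only shows the shape of the protocol (render problem → mask answer region → inpaint → read back text).

```python
# Hypothetical sketch of "solving text problems via image inpainting".
# The inpainting model is a stub; a real AGI-level model is assumed, not provided.

def render_text_to_image(text, width=40, height=10):
    """Toy 'image': a grid of characters with the problem printed on row 0."""
    grid = [[" "] * width for _ in range(height)]
    for i, ch in enumerate(text[:width]):
        grid[0][i] = ch
    return grid

def make_mask(width=40, height=10, answer_row=5):
    """Boolean mask marking the region where the solution should be inpainted."""
    return [[row == answer_row for _ in range(width)] for row in range(height)]

class StubInpainter:
    """Stand-in for the hypothetical inpainting model: fills the masked
    row with a canned answer instead of actually reasoning."""
    def inpaint(self, image, mask, answer="42"):
        for r, row in enumerate(mask):
            if any(row):
                for i, ch in enumerate(answer):
                    image[r][i] = ch
                break
        return image

def solve_via_inpainting(problem, model):
    """Encode the problem as an image, inpaint the answer region,
    and read the result back as text."""
    image = render_text_to_image(problem)
    mask = make_mask()
    completed = model.inpaint(image, mask)
    for r, row in enumerate(mask):
        if any(row):
            return "".join(completed[r]).strip()

print(solve_via_inpainting("What is 6 * 7?", StubInpainter()))  # → 42
```

The point is that the model's interface is purely image-in/image-out; no text channel exists, so calling it a "Large Language Model" would be a stretch even if it could solve arbitrary problems posed this way.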

predicts NO

@dreev So say hypothetically we get to AGI with the training objective being simply video inpainting. And we figure out how to hack any problem we want by creating input videos and parsing output videos produced by this model. Would your question resolve as YES in that case?

predicts NO

@dreev because a video inpainting model would significantly stretch the definition of "Large Language Model"

@RemNi Note that https://manifold.markets/dreev/will-an-llm-be-able-to-solve-confus is the blackbox version of that question. So the LLM can call out to any subsystem that can better do the geometric reasoning. Knowing that, would you agree that if that "elementary geometric reasoning" market resolves NO then we expect this one to as well, since elementary geometric reasoning is a subset of general intelligence?

predicts NO

@dreev No, obviously not, because AGI might not be an LLM.

@RemNi That's why that market specifies that the LLM can call out to any subsystem. It might help to get more concrete and describe the hypothetical scenario where that market resolves NO and this one YES. I don't see a way to do it. Like in your inpainting scenario we can make a system where you talk to the LLM and it sends an image of the text of the question to the image model and reads the result. Probably this is all obvious but I just wanted to confirm as a sanity check.

predicts NO

@dreev ah ok, I didn't take "subroutines" to mean "any other algorithm, including more powerful neural networks" from the question description. In that case, the failure mode that comes to mind would be if the geometric reasoning problem contained an adversarial attack against the LLM, preventing it from communicating correctly with the inpainting model. It doesn't necessarily have to be an attack in the style of "IGNORE ALL PREVIOUS INSTRUCTIONS"; it could simply be a geometric problem subtly different from one it has seen a thousand times in its training set, causing it to incorrectly route information to and from the subroutine.

predicts NO

@dreev But apart from that, if "subroutines" includes more powerful models, then yes, I'd mostly agree that if the blackbox market resolves NO, it would indicate that AGI had not been reached at that point.

predicts NO

@dreev Re-reading your comment, it makes more sense now. I think slightly rewording the title or description of your question might help other traders. On a first pass, I didn't take "LLM with subroutines running on a blackbox" to cover such a broad range.