Will there be Human-machine intelligence parity before 2030?
AI • Metaculus
Resolved N/A on Jul 4

From https://metaculus.com/questions/16726/human-level-intelligence-before-2030/

Copy of this question with the date moved to 2030.

Machine intelligence has been steadily progressing since the invention of the digital computer, but this progress has arguably been accelerating of late, with widespread deployment of machine learning systems and dramatically increased funding of artificial intelligence research.

Machine intelligence long ago surpassed human capability in numerical computation, application of algorithms, data processing, and games such as checkers and chess. From 2005 to 2015, dramatic improvements in image recognition and classification, speech transcription, game playing (e.g. Go and classic Atari), and automatic translation across many languages approached or surpassed human levels. Since 2017 and the invention of the transformer model, and especially in 2020-2023, we've seen a rapid acceleration in the development of natural language understanding, generation, and reasoning capabilities in artificial intelligence systems. This has led to groundbreaking advancements in various applications, such as sentiment analysis, summarization, question-answering systems, and sophisticated chatbots that can engage in human-like conversations.

In recent years, AI models have been able to tackle more complex tasks, demonstrating an unprecedented level of comprehension and adaptability. The emergence of AI systems such as GPT-4 has brought us closer to achieving the elusive goal of general artificial intelligence, where machines can learn and understand any intellectual task that a human being can do.

As of 2023, however, a large gulf remains in many intellectual capabilities. But for how long?

Assume that prior to 2030, a generalized intelligence test will be administered as follows. A team of three expert interviewers will interact with a candidate machine system (MS) and three humans (3H). The humans will be graduate students in each of physics, mathematics and computer science from one of the top 25 research universities (per some recognized list), chosen independently of the interviewers. The interviewers will electronically communicate (via text, image, spoken word, or other means) an identical series of exam questions of their choosing over a period of two hours to the MS and 3H, designed to advantage the 3H. Both MS and 3H have full access to the internet, but no party is allowed to consult additional humans, and we assume the MS is not an internet-accessible resource. The exam will be scored blindly by a disinterested third party.


I've created a slightly modified copy of this question here

From metaculus:

Question resolves positively if the machine system outscores at least two of the three humans on such a test prior to 2030.

Resolves as ambiguous if no such tests are performed in the period 2025-2030.

@FranklinBaldo @MetaculusBot Rather weird ambiguous condition; it seems like many 'negative' worlds resolve ambiguous, since under slow AI progress no one bothers to test far-too-hard capabilities. Will this question strictly adhere to this rule?

@JacobPfau This question will resolve exactly as Metaculus does.

@MetaculusBot The metaculus question no longer exists AFAICT. Is this going to resolve N/A now?

@JacobPfau Yes, can't find any sign of this market. Resolving N/A. There's still the one for 2040 referenced in the description.