When will AI be better than humans at AI research? (Basically AGI)
Before 2030: 31%
Before 2035: 51%
Before 2040: 70%
Before 2050: 79%
Before 2070: 82%
Before 2100: 83%

When will there be an AI which is better at doing AI research than the average human AI researcher not using AI?

The AI must be capable of doing everything that a current AI researcher does, including coming up with new research ideas, brainstorming with coworkers, writing code, debugging, doing code reviews, communicating results, and writing papers.

If this is constrained to a specific domain of AI research, such as LLM development or interpretability, that still counts.

This question is meant to be another version of "When will we get text AGI / transformative AI?"

All answers which are true resolve Yes.


bought Ṁ35 Before 2100 YES

I think requiring AIs to do brainstorming is a bit pointless, since brainstorming is a uniquely human way of coming up with ideas. Maybe it would be better to just judge them on their output.

I.e., you tell an AI "Please generate a better AI algorithm", it thinks for a while, and it spits out an implementation and a paper that are better than the state of the art. I would definitely call this "better than humans at AI research", but it wouldn't fit the detailed criteria of the question.


sold Ṁ23 of Before 2030 NO

Due to the subjective resolution criteria, I've sold my positions and will not bet further on this market.

bought Ṁ10 of Before 2100 YES
sold Ṁ2 of Before 2035 YES

Can you operationalize AI research? For example, does it suffice to have a task-specific model that can improve language models faster than humans can, or does this include all the different types of AI research? Is interpretability part of AI research?

sold Ṁ0 of Before 2040 YES

@NoaNabeshima I mean a model that is capable of doing everything that a current AI researcher does, including coming up with new research ideas, brainstorming with coworkers, writing code, debugging, doing code reviews, communicating results, and writing papers.

If this is constrained to a specific domain of AI research, such as LLM development or interpretability, that still counts.

It's basically equivalent to "When will we get text AGI / transformative AI?"
