When will AI be better than humans at AI research? (Transformative AI)
Before 2030: 50%
Before 2035: 78%
Before 2040: 86%
Before 2050: 89%
Before 2070: 91%
Before 2100: 93%
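
The percentages are cumulative, so the probability the market assigns to any single interval is the difference between adjacent rows, and the conditional probabilities follow by renormalizing. A minimal sketch in Python, assuming only the figures in the table above (all variable names are illustrative):

```python
# Cumulative market probabilities from the table: P(resolves Yes before year).
cumulative = {
    2030: 0.50,
    2035: 0.78,
    2040: 0.86,
    2050: 0.89,
    2070: 0.91,
    2100: 0.93,
}

prev_year, prev_p = None, 0.0
for year, p in cumulative.items():  # dicts preserve insertion order (Python 3.7+)
    label = f"before {year}" if prev_year is None else f"{prev_year}-{year}"
    # Mass the market assigns to this interval, and the chance it happens
    # by `year` conditional on it not having happened earlier.
    cond = (p - prev_p) / (1 - prev_p)
    print(f"{label}: mass={p - prev_p:.2f}, conditional={cond:.2f}")
    prev_year, prev_p = year, p

# Mass the market leaves for "2100 or later / never".
print(f"not before 2100: {1 - prev_p:.2f}")
```

For example, the implied chance of arrival between 2030 and 2035 is 78% - 50% = 28%, or 56% conditional on it not arriving before 2030.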

When will there be an AI which is better at doing AI research than the average human AI researcher not using AI?

The AI must be capable of doing everything that a current AI researcher does, including coming up with new research ideas, brainstorming with coworkers, writing code, debugging, doing code reviews, communicating results, and writing papers.

If this is constrained to a specific domain of AI research, such as LLM development or interpretability, that still counts.

This question is meant to be another version of "When will we get text AGI / transformative AI?"

All answers which are true resolve Yes.


I think requiring AIs to do brainstorming is a bit pointless, since brainstorming is a uniquely human way of coming up with ideas. Maybe it would be better to just judge them on their output.

I.e., you tell an AI "Please generate a better AI algorithm", it thinks for a while, and it spits out an implementation and a paper that are better than the state of the art. I would definitely call this "better than humans at AI research", but it wouldn't fit the detailed criteria of the question.


Due to the subjective resolution criteria, I've sold my positions and will not bet further on this market.

Can you operationalize AI research? For example, does it suffice to have a task-specific model that can improve language models faster than humans can, or does this include all the different types of AI research? Is interpretability part of AI research?

@NoaNabeshima I mean a model that is capable of doing everything that a current AI researcher does, including coming up with new research ideas, brainstorming with coworkers, writing code, debugging, doing code reviews, communicating results, and writing papers.

If this is constrained to a specific domain of AI research, such as LLM development or interpretability, that still counts.

It's basically equivalent to "When will we get text AGI / transformative AI?"