Eliezer Yudkowsky retweeted @deredleritt3r's summary of the state of AI research:
You don't truly understand the magnitude of the potential impact of powerful AI on the world unless you are aware, and have fully internalized, that senior leadership and most researchers at the frontier labs actually believe the following:
Existing AI is already significantly speeding up AI research. Very soon (this year), AI will very likely take over ALL aspects of AI research other than generation of novel research ideas. Soon (within the next 2 years), AI will very likely take over ALL aspects of AI research, period. This means hundreds of thousands of GPUs working 24/7 to discover novel ideas at the level of, or better than, the likes of Alec Radford, Ilya Sutskever, etc.
[...]
The trajectory has been ~the same as that publicly predicted by the frontier labs. We have been accelerating. And, as of right now, all signs are indicating that the acceleration shall continue and that full automation of AI research and, potentially, RSI are firmly on the horizon.
Two of these seem like reasonably testable predictions:
Very soon (this year), AI will very likely take over ALL aspects of AI research other than generation of novel research ideas.
Soon (within the next 2 years), AI will very likely take over ALL aspects of AI research, period.
How likely is "very likely"? Let's find out.
The first claim is more subjective than the second; since evaluating it relies heavily on my own judgment of the state of play, I won't bet on this market. Instead, I'll look at my own experience as a programmer, formal and informal testimony from AI researchers, surveys, etc.