By 2030, will large language models still be at the peak of AI? [DRAFT]
2030 · 30% chance

TODO: more exact resolution criteria to come; overall, this will probably end up being resolved according to my subjective judgement.


Interesting. Of course, LLMs today aren't at the peak of playing Go or recognizing images. In what sense are they at the peak? I guess they are the most general?

The most general, and to me also the most useful (though maybe in some sense image recognition is more useful, since it allows physical documents to be entered into computers at scale?).

The resolution criteria should probably focus on the application to AI alignment. Right now some people are suggesting that AI alignment should focus on aligning LLMs, since they seem to be the future of AI. I'm pretty skeptical of this because I feel like any day now they're gonna run into the inherent limits of LLMs and start looking for ways to go beyond them. But admittedly it's pretty hard to argue for this empirically, because right now LLMs do seem to be the frontier of alignment-worthy stuff. What I'm thinking is that if I'm right, then in a few years they'll be overtaken by some other technology and everyone will agree that "aligning LLMs is the future" was wrong.
