
Inspired by the following comment on lesswrong:
Are autoregressive LLMs a doomed paradigm?
YES = Autoregressive LLMs are a dead end: people will improve them as much as possible, but at some point they will stop getting better, without ever reaching a performance level generally considered human-level.
NO = They will continue to improve up to human performance.
Quibbles:
1) If autoregressive LLMs lead smoothly to some other paradigm which behaves overall as a different beast, but which can be clearly traced back with continuity to current LLMs, that counts as NO; e.g., if LLMs are used to stitch together agents, as people are trying to do now.
2) The question is specifically about autoregressive LLMs. A "diffusion" LLM (I don't know if that makes sense technically, I'm making it up to gesture at the idea) replacing autoregressive LLMs as the dominant approach because it scales better would count as YES.
2a) Quibbles (1) and (2) may conflict if there is an arguably continuous transition to something which is clearly not autoregressive in the end. In that case, (2) has priority, i.e., continuous progress from autoregressive to non-autoregressive LLMs counts as YES.
3) "Human performance" may be ambiguous. I'd lean towards waiting to resolve the market until there's a general consensus.
4) Targets more specific than "human performance" may make sense, but I'd have to define a host of them, and they could turn out to be moot later. "Human performance" looks like the kind of thing that everyone will eventually be forced to confront and decide on.