I recently made a prediction on my Substack: https://substack.com/@philosophybear/note/c-60071121 and I want to know if Manifold thinks I'm right.
On the 26th of June, 2025, I will ask Manifold via poll whether the following prediction of mine has held up over the last twelve months:
"Prediction: In the next twelve months, changes to LLMs that allow them to interact in more ways, such as:
Voice interaction
Anthropic’s artifacts
Agents
Reading webpages
Spreadsheet use
Etc.
Will be greatly more significant to the perception of LLM power and usefulness than increases in their intellectual capability.
It will also restructure the way people think about LLMs- it's harder to think of something as a next-word generator when it acts in a more extensive sense. That seems to be the way human prejudice about this stuff works (see 4E stuff, which seems to me to reflect these prejudices).
LLMs are already 'smart' enough to do most stuff the average office worker wants to accomplish with a computer. I've been an office worker- most of this stuff ain't hard! But without the capacity to control said computer, talk, etc., LLMs are 'stuck'.
This is not to say that further intellectual growth isn't important- it's far more important in the long run. Right now, though, the major bottleneck to public reception and commercial deployment is the available modes of interaction with computers and their users."
If a majority of users say it has held up, this question will resolve yes, else it will resolve no.
I think you might be generally right in the span of a few months but not a year. I have a hard time believing that anyone, whether bear or LLM engineer, will be able to make a prediction that seems really 'spot on' next year because whatever the 'cool new thing' is now (like Artifacts) will be old news June 2025. I think the major bottleneck to commercial application is still hallucination by far, and imo there's still a decent chance that raw intelligence wins over being friendly and conversational, but time will tell!