Will Aidan McLau's claim that very large models are "refusing instruction tuning" be validated by 2030?
42% chance
https://x.com/aidan_mclau/status/1859444783850156258 According to Aidan McLau, the reason very large models are not being released is that they are resisting instruction tuning. Resolves YES if a current or former AI researcher at Google, OpenAI, Anthropic, or Meta validates this claim, or if it is confirmed independently by published research.
This question is managed and resolved by Manifold.
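For context, "instruction tuning" (also called instruct tuning or supervised fine-tuning, SFT) means fine-tuning a pretrained language model on prompt/response pairs so that it follows instructions. Below is a minimal sketch of a single SFT gradient step, assuming the Hugging Face transformers library and the tiny stand-in model "sshleifer/tiny-gpt2"; the model name and the toy example pair are illustrative only and say nothing about how the labs named above actually tune their models.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sshleifer/tiny-gpt2"  # assumed stand-in; any small causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One toy instruction/response pair, formatted as a single training sequence.
prompt = "Instruction: Summarize the water cycle.\nResponse:"
response = " Water evaporates, condenses into clouds, and falls as rain."
batch = tokenizer(prompt + response, return_tensors="pt")

# Labels equal the input ids: standard causal-LM loss over the whole sequence.
# Real SFT pipelines typically mask the prompt tokens so the loss covers only the response.
labels = batch["input_ids"].clone()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
outputs = model(**batch, labels=labels)  # forward pass computes the loss internally
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"single SFT step loss: {outputs.loss.item():.3f}")

The claim in question is that this kind of post-training stops working as intended at very large scale; the sketch only shows what the procedure itself looks like at toy scale.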
Related questions
Will a model costing >$30M be intentionally trained to be more mechanistically interpretable by end of 2027? (see desc) (57% chance)
Will models be able to do the work of an AI researcher/engineer before 2027? (33% chance)
AI: Will someone train a $10T model by 2100? (57% chance)
AI: Will someone train a $1T model by 2080? (62% chance)
100GW AI training run before 2031? (37% chance)
Will any 10 trillion+ parameter language model that follows instructions be released to the public before 2026? (48% chance)
AI: Will someone train a $1T model by 2050? (81% chance)
AI: Will someone train a $10B model by 2030? (83% chance)
AI: Will someone open-source a $1B model by 2030? (38% chance)
AI: Will someone open-source a $100M model by 2030? (69% chance)