Will Aidan McLau's claim that very large models are "refusing instruction tuning" be validated by 2030?
Basic · 4 traders · Ṁ55 · closes 2030
42% chance
https://x.com/aidan_mclau/status/1859444783850156258 According to Aidan McLau, the reason very large models are not being released is that they are resisting instruction tuning. Resolves YES if a current or former AI researcher at Google, OpenAI, Anthropic, or Meta validates this claim, or if it is confirmed independently by research.
This question is managed and resolved by Manifold.
Related questions
AI: Will someone train a $1B model by 2025?
67% chance
Will models be able to do the work of an AI researcher/engineer before 2027?
36% chance
AI: Will someone open-source a $100M model by 2025?
64% chance
Will a model costing >$30M be intentionally trained to be more mechanistically interpretable by end of 2027? (see desc)
57% chance
AI: Will someone train a $10T model by 2100?
57% chance
By March 14, 2025, will there be an AI model with over 10 trillion parameters?
62% chance
AI: Will someone open-source a $10M model by 2025?
74% chance
AI: Will someone train a $1T model by 2080?
62% chance
AI: Will someone train a $100M model by 2025?
85% chance