Will taking annual MRIs of the smartest alignment researchers turn out alignment-relevant by 2033?
8% chance
@ArnavBansal ha, thanks! An MRI alone wouldn't be sufficient; it assumes incomplete a priori knowledge of how brain activity maps to any possible alignment output (based on what we know now).
AI Alignment questions
By the end of 2026, will we have transparency into any useful internal pattern within a Large Language Model whose semantics would have been unfamiliar to AI and cognitive science in 2006?
49% chance
What percentage of Manifold poll respondents will agree that weak AGI has been achieved at the end of June 2024?
Related questions
Will there exist a compelling demonstration of deceptive alignment by 2026?
68% chance
Will OpenAI's Superalignment project produce a significant breakthrough in alignment research before 2027?
38% chance
Will there be a very reliable way of reading human thoughts by the end of 2030? 🧠🕵️
37% chance
Will an AI alignment research paper be featured on the cover of a prestigious scientific journal? (2024)
30% chance
Will I think that alignment is no longer "preparadigmatic" by the start of 2026?
28% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
37% chance
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
48% chance
Will tailcalled think that the Infrabayesianism alignment research program has achieved something important by October 20th, 2026?
31% chance
Will I still work on alignment research at Redwood Research in 3 years?
55% chance
Conditional on not having died from unaligned AGI, I consider myself a full time alignment researcher by the end of 2030
34% chance