Will taking annual MRIs of the smartest alignment researchers turn out to be alignment-relevant by 2033?
7% chance
This question is managed and resolved by Manifold.
People are also trading
Will "Defining alignment research" make the top fifty posts in LessWrong's 2024 Annual Review? (6% chance)
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved? (34% chance)
Will I still work on alignment research at Redwood Research in 3 years? (66% chance)
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research? (81% chance)
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved? (52% chance)
Will a major AI alignment office (e.g. Constellation/Lightcone/HAIST) give out free piksters to alignment ppl by EOY 2027? (43% chance)
Will a >$10B AI alignment megaproject start work before 2030? (37% chance)
Conditional on not having died from unaligned AGI, I consider myself a full-time alignment researcher by the end of 2030. (34% chance)
Will deceptive misalignment occur in any AI system before 2030? (81% chance)
Will an AI built to solve alignment wipe out humanity by 2100? (12% chance)
@ArnavBansal ha, thanks! MRI alone wouldn't be sufficient; it assumes incomplete a priori knowledge of how brain activity relates to any possible alignment output (based on what we know now).