
Conditional on not having died from unaligned AGI, will I consider myself a full-time alignment researcher by the end of 2030?
34% chance
I suspect that the primary mechanisms by which this market would resolve NO are burnout or running out of funding. However, traders should not limit themselves to these mechanisms.
Relevant market: https://manifold.markets/AlanaXiang/will-i-consider-myself-a-fulltime-a
I do not intend to buy shares in this market (either YES or NO).
This question is managed and resolved by Manifold.
Related questions
Anthropic publishes research on Claude self-reflection and AGI alignment before December 15, 2025
50% chance
Will we get AGI before 2035?
64% chance
Will Meta AI start an AGI alignment team before 2026?
15% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Will we solve AI alignment by 2026?
2% chance
What will be true about AGI and longevity in 2040?
Will we reach "weak AGI" by the end of 2025?
3% chance
Will I think that alignment is no longer "preparadigmatic" by the start of 2026?
18% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
