
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
Ṁ1,027 · Resolved Jan 21
Resolved NO
Resolves YES if I believe a plurality of the AI safety field should be focused on this problem. Examples of other focuses include robot safety, AI boxing, and corrigibility. If I believe some theoretical issue should be a priority/focus, with automated AI research as its most likely first application, this also resolves YES.
A non-exhaustive list of things I consider to be automated AI research: (1) hardware innovation, such as chip design; (2) writing ML papers or abstracts; (3) writing neural network code; (4) automating prompt engineering; (5) automated theorem proving applied to ML-motivated problems.
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ178
2 | | Ṁ69
3 | | Ṁ27
4 | | Ṁ22
5 | | Ṁ14
Related questions
Will we solve AI alignment by 2026?
1% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Will Meta AI start an AGI alignment team before 2026?
45% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will xAI significantly rework their alignment plan by the start of 2026?
32% chance
Will I focus on the AI alignment problem for the rest of my life?
62% chance
Will ARC's Heuristic Arguments research substantially advance AI alignment before 2027?
26% chance