
In a year from today, will I have a satisfactory framework for describing the epistemology of AI alignment?
Ṁ515 volume · resolved Jul 2
Resolved as 60%
I've written some stuff about this topic here: https://forum.effectivealtruism.org/s/sC8KoZx9jAdrEtmHj
By "satisfactory" I mean from my perspective given the research I'm doing this year.
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ5
2 | | Ṁ2
3 | | Ṁ0
Related questions
Will we solve AI alignment by 2026? (1% chance)
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research? (81% chance)
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)? (33% chance)
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved? (34% chance)
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved? (52% chance)
Will ARC's Heuristic Arguments research substantially advance AI alignment before 2027? (26% chance)
Will an AI produce encyclopedia-worthy philosophy by 2026? (15% chance)
Will there be a well-accepted formal definition of value alignment for AI by 2030? (25% chance)
Will Meta AI start an AGI alignment team before 2026? (45% chance)