Will Scott Aaronson be at least as optimistic about AI alignment next year?
Resolved YES (Jul 3)
Scott Aaronson just announced that he's working at OpenAI for the next year: https://scottaaronson.blog/?p=6484. In this post he also says "just as Eliezer became more and more pessimistic about the prospects for getting anywhere on AI alignment, I’ve become more and more optimistic".
Resolves YES if, based on Scott Aaronson's blog posts or other statements after about one year, he is generally at least as optimistic about AI alignment as he is today; resolves NO if he's less optimistic.
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ128
2 | | Ṁ62
3 | | Ṁ45
4 | | Ṁ31
5 | | Ṁ26
People are also trading
Will we solve AI alignment by 2026?
2% chance
Will xAI significantly rework their alignment plan by the start of 2026?
20% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Which Scott Aaronson AI world will come to pass?
Will Meta AI start an AGI alignment team before 2026?
34% chance
Will AI convincingly mimic Scott Alexander's writing in style, depth, and insight before 2026?
7% chance
Will Anthropic be the best on AI safety among major AI labs at the end of 2025?
92% chance
Will "The Field of AI Alignment: A Postmortem, and ..." make the top fifty posts in LessWrong's 2024 Annual Review?
28% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
It's getting close to the 1-year mark. How optimistic do people assess Scott has been on AI alignment recently? He's talked about the topic a lot (example post: https://scottaaronson.blog/?p=7230 ), but I haven't been following his posts closely, so I'm not sure; if anyone else has a sense, let me know.
I still don't know how to resolve this - he's posted some long interviews about AI safety recently but I don't have time to listen to them.
If traders could provide evidence/info/data, that would be helpful. Especially the people who traded recently - @KatjaGrace you seem quite confident.
@jack I just asked Scott and he said "I’d say I’m about the same amount of optimistic as a year ago — so that’s a yes I guess".