Will Scott Aaronson be at least as optimistic about AI alignment next year?
Resolved YES · Jul 3
Scott Aaronson just announced that he's working at OpenAI for the next year: https://scottaaronson.blog/?p=6484. In this post he also says "just as Eliezer became more and more pessimistic about the prospects for getting anywhere on AI alignment, I’ve become more and more optimistic".
Resolves YES if, based on Scott Aaronson's blog posts or other statements after about one year, he is generally at least as optimistic about AI alignment as he is today; resolves NO if he is less optimistic.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ128 |
| 2 | | Ṁ62 |
| 3 | | Ṁ45 |
| 4 | | Ṁ31 |
| 5 | | Ṁ26 |
People are also trading
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Which Scott Aaronson AI world will come to pass?
AI honesty #2: by 2027 will we have a reasonable outer alignment procedure for training honest AI?
24% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Will ARC's Heuristic Arguments research substantially advance AI alignment before 2027?
26% chance
Which Scott Aaronson AI world will come to pass? (Metaculus forecast, 2027-12-31)
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance
Is AI alignment computable?
50% chance
It's getting close to the 1-year mark. How optimistic do people assess Scott to be about AI alignment these days? He's talked about the topic a lot (example post: https://scottaaronson.blog/?p=7230), but I haven't been following his posts closely, so I'm not sure; if anyone else has a sense, let me know.
I still don't know how to resolve this: he's posted some long interviews about AI safety recently, but I don't have time to listen to them.
If traders could provide evidence/info/data, that would be helpful. Especially the people who traded recently - @KatjaGrace you seem quite confident.
@jack I just asked Scott and he said "I’d say I’m about the same amount of optimistic as a year ago — so that’s a yes I guess".

