
Will a major AI Alignment researcher go on Lex Fridman's podcast to (in part) make a case for AI risks in 2023?
Resolved YES · Apr 1 · Ṁ3877
For this question to resolve positively, the guest must at least self-identify as an AI Alignment researcher. If they don't mention AI risks at all during the podcast, it won't count toward a positive resolution.
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ95
2 | | Ṁ36
3 | | Ṁ27
4 | | Ṁ22
5 | | Ṁ5
Related questions
- Will Veritasium make a YouTube video about AI Alignment or Existential Risks in 2025? (37% chance)
- Which well-known scientist will Eliezer Yudkowsky have a long recorded conversation with about AI risk, before 2026?
- Will Destiny discuss AI Safety before 2026? (71% chance)
- Will xAI significantly rework their alignment plan by the start of 2026? (60% chance)
- Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research? (81% chance)
- Will there be serious AI safety drama at Meta AI before 2026? (52% chance)
- Will Eliezer Yudkowsky believe xAI has had a meaningful positive impact on AI alignment at the end of 2024? (3% chance)
- Will we solve AI alignment by 2026? (4% chance)
- By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)? (33% chance)
- Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030? (11% chance)