I have been conducting an informal survey of AI safety experts to elicit their opinions on various topics. I will end up with responses from around 20 people, including researchers at DeepMind, Anthropic, Redwood, FAR AI, and others. The sample was pseudo-randomly selected, optimising for a) diversity of opinion, b) diversity of background, c) seniority, and d) who I could easily track down.
One of my questions was: "What research direction do you think will reduce existential risk the most?" I asked participants to answer from their inside view as much as possible.
Which theme of answer came up most often?
I will resolve this question when the post for this survey is published, which will happen some time between March and June. Thanks to Rubi Hudson for suggesting turning this into a prediction market.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ41
2 | | Ṁ26
3 | | Ṁ25
4 | | Ṁ9
5 | | Ṁ2