According to 20 AI safety experts, what is the biggest mistake the AI safety community has made in the past?
resolved May 23
- 15% — Too much abstract work (e.g. agent foundations) / not enough empirical work (e.g. interpretability) [resolved 100%]
- 27% — Starting/supporting the big AI companies, contributing to race dynamics
- 25% — Too much focus on technical research, not enough on governance, outreach & advocacy
- 18% — Bad messaging (e.g. Yudkowsky & PauseAI are too extreme)
- 5% — Capabilities externalities caused by AI safety research
- 9% — Other

I have been conducting an informal survey of AI safety experts to elicit their opinions on various topics. I will end up with responses from around 20 people, including researchers at DeepMind, Anthropic, Redwood, FAR AI, and others. The sample was pseudo-randomly selected, optimising for (a) diversity of opinion, (b) diversity of background, (c) seniority, and (d) who I could easily track down.

One of my questions was: "Are there any big mistakes the AI safety community has made in the past?" I asked participants to answer from their inside view as much as possible.

Which theme of answer came up most often?

I will resolve this question when the post for this survey is published, which will happen sometime between March and June. Thanks to Rubi Hudson for suggesting turning this into a prediction market.


Comments:

- Link to survey results?
- This is an extremely interesting market
