By 2028, will anyone carry out a terrorist attack against AI capabilities researchers that is plausibly attributable to Yud?

Yudkowsky advocated a policy against AI capabilities research backed by government violence (as all government policies are). However, some worry that this will be misinterpreted as a call to individual violence or terrorism.

This question doesn't have exact resolution criteria because, for all I know, a terrorist attack could take many unpredictable forms. However, here's a rough anchor it will be compared against:

  • There should be actual death or intent to kill. Material sabotage is interesting in its own right but not what this question is about.

  • There should be independent evidence that the person in question is a fan of Eliezer or otherwise directly influenced by Eliezer's writings. This evidence is pretty broad and includes e.g. commenting on LessWrong or using a lot of rationalist lingo.

  • There should not be evidence that the terrorist understood the limits of what Yudkowsky was advocating. For example, if there is a manifesto in which the terrorist complains that Yudkowsky did not go far enough because he opposes violence, that would disqualify them.
