MANIFOLD
Will an AI Doomer turn to violence by the end of 2026?
54 traders · Ṁ1k / Ṁ4.5k · closes 2027 · 89% chance

A human will commit violence against another human in the fight against AI.

This will resolve YES if an AI doomer targets a prominent AI researcher/executive for violence and either succeeds or is arrested in the planning stage for terrorism, à la Ted K.


Resolves yes:

https://www.nbcbayarea.com/news/local/suspect-molotv-cocktail-attack-altma-sf-id/4067083/

https://x.com/i/status/2042793949858009192

@AlexanderLeCampbell Are you waiting for the sentencing to resolve this? It seems fairly undeniable now that there are 3 distinct culprits.

@DylanRichardson "and either succeeds or is arrested in the planning stage as terrorism ala Ted K"

I wonder if that actually qualifies as 'succeeds'?

Would any of those culprits say they were successful?

Not to dismiss violence, but the market is written in a way that does not clearly indicate that this qualifies.

@JoeandSeth I think it would be quite a stretch to interpret this market as requiring that terrorism serves as a reliable way to achieve political goals!

If it's more direct violent intent you want, just take the fact that they arrested him while he was trying to break down the door of OpenAI in order to set a fire and "kill everyone inside".

@DylanRichardson Being in the middle of breaking down a door is neither in the planning stage nor after a 'successful' attack.

I'm just reading the words. "Intent" isn't among them, and neither is "terrorism shown to reliably accomplish goals" - which I think is an absurd thing to claim either way. In the vast majority of cases, terrorism is counterproductive, not least due to the easily anticipated polarization that results.

I dislike the market as it currently stands. The title is easily interpreted as being about a very serious and specific event, involving a specific group of people "turning to violence". The resolution criteria, however, describe a much weaker claim: a single "generally anti-tech" person committing violence would count.

A silly example to communicate how I feel about the mismatch: imagine there was a market "Will the Swiss turn to violence by the end of 2026?" This could be interpreted as being about Switzerland declaring war or something. A resolution criterion of "at least one person living in Europe commits violence" would then feel very off.

In particular, the mismatch feels especially bad to me because it is pointed at a group of people. I think a person only seeing "Will AI Doomers turn to violence by the end of 2026? 34% chance" is left with a very misleading picture of the situation/market. A Swiss person could understandably feel like the market is trying to portray Switzerland in a bad light.

@AlexanderLeCampbell strong agree with this

At the very least, the title should be changed to "Will any AI Doomer ..."

But actually based on below clarifications, it should be more like "Will any anti-AI or anti-tech person ..."

@jack changed to "Will an AI Doomer..."

Does “doomer” mean “generally anti-AI” or specifically someone who thinks AI will cause human extinction?

bought Ṁ10 YES

@GraceKind Generally anti-AI or anti-tech. Blowing up a server farm in a way that leads to human casualties would qualify. Broad definition, as we want a sensitive instrument here.

Anti-tech != doomer... There are so many categories of people who are anti-tech but not AI doomers. And also, most AI doomers are not anti-AI - they are anti-bad-AI and pro-good-AI.

@jack Exactly. Most folks concerned about AGI are otherwise very pro-tech and often even pro-aligned-AGI.

@a2bb Their market has a minimum of three casualties.

Mine is about an attempt. So would expect it to trade well above theirs.

plus ‘24 expiry vs ‘26