Resolves only if someone kills an AI researcher, AI company founder, or a similar figure, in order to protest AI development, to scare people away from AI development, or for a similar reason.
Resolves YES if afterwards, AI xrisk people are generally considered dangerous and struggle to get their message out. Since this is kind of vague and vibes-based, I will not bet. I may choose to resolve this market based on a poll of AI alignment researchers after the fact.
"I may choose to resolve this market based on a poll of AI alignment researchers after the fact."
Seems pretty irrational to me.
"If an Irish republican commits a terrorist attack will Irish republicanism be viewed in a negative light afterwards? To resolve this I will ask these 12 prominent Irish republicans."
This would seem like a pretty bad market design to me. And I'm not sure why this market would be especially different from that.
My main worry is that alignment researchers have a much higher P(doom) than the general public, so they might be a lot more tolerant of terrorism, or might secretly be in favor even though they don't say so outright.
(E.g. in a recent interview, Holly Elmore [who generally seems like a reasonable person but has a high P(doom)] didn't sound like she would be especially against AI-xrisk terrorism. I don't remember her exact words, but it didn't seem like she would be 99% against it, maybe only 55% against it or so.)
And while one could theoretically be in favor of terrorism and still judge the reputational consequences in an unbiased way, such a person just doesn't seem like the best one to ask.
@justifieduseofFallibilism Two complications to your reasoning:
I said "after the fact", i.e. someone would first do an act of terrorism, then I would give it a bit of time to settle, and then I would ask the researchers. I don't know how much of a practical difference that would make (intuitively in your Irish republican situation, I wouldn't expect it to change much), but I think it at least changes what it would look like.
I said "may", i.e. if I feel like the answer is obvious, or I feel like AI researchers seem biased/in denial, I would probably not resolve it based on a poll, but instead just resolve it YES.
I mainly included it to give myself an out in case it's annoyingly ambiguous and I need an actual resolution.
@tailcalled Makes sense. Just make sure not to only listen to Less Wrong type people when resolving this market, especially if all the LWers think x and all the non-LWers think y. (But it sounds like you were already planning on being cautious 👍).
@justifieduseofFallibilism Note that ultimately, I am a Less Wrong type person: https://www.lesswrong.com/users/tailcalled
@tailcalled Same here, that's why I am so wary of the biases humans have and how groupthink can impact judgement. 👍
https://manifold.markets/BenjaminIkuta/will-anyone-commit-violence-in-orde?r=QmVuamFtaW5Ja3V0YQ
Roko: "We should stop developing AI, we should collect and destroy the hardware and we should destroy the chip fab supply chain that allows humans to experiment with AI at the exaflop scale."
That people are making statements like this suggests we are not many steps away from terrorism.
I don't think it will completely marginalize the worries, but I do think it will cause them to be taken less seriously, the same way anti-abortion terrorism weakened the public perception of pro-life positions.
@DavidBolin Roko doesn't seem like the terrorist type to me, and also this isn't gonna work unless a superpower backs the destruction.
@tailcalled I'm not saying you should change this market now, but if someone has to die, I think this is a bad definition. There are plenty of terrorist attacks where no one dies that still get lots of press coverage.
@DanielFilan Idk, I didn't specify a closing date for this market, so I'm not sure I should change the conditions.