If AI safety is divided by left/right politics in the next 5 years, will the left be more pro-regulation than the right?
76% chance

I'll resolve this entirely subjectively. (I'll stay out of betting in this market though.)

Resolves NA if AI safety doesn't become politically divided.


Do you include Fairness, Accountability, and Transparency as part of AI safety, or is this specifically about existential risk?

@ahalekelly Just x-risk

@NathanNguyen So if the left is more pro-regulation than the right for 99% fairness/diversity/inclusion reasons and 1% x-risk reasons, how would you resolve? What if it's 90-10 or 50-50?

@mariopasquato I’m not sure it makes sense to quantify things in that way. It’s more of a “I know it when I see it” kind of thing.

@NathanNguyen Any idea how to make this less arbitrary? Where does x-risk end and where do concerns about jobs and discrimination begin? Right now, if you read the famous open letter signed by Bengio, Wozniak, etc., would you conclude that the motive is x-risk or just negative economic and social impact?

predicts YES

Always

predicts YES

@Gigacasting That is why they are the ones trying to outlaw medication, right? Oh wait, no

The definition of AI has been stretched to the point of meaninglessness. Is the TikTok ban AI? Every service pretends to be AI.

@DerkDicerk I mean AI of the sort that people fear will end humanity

@NathanNguyen Those are autonomous weapons regulations, and they're nonpartisan

@DerkDicerk I think autonomous weapons aren't the kind of thing AI safety folks worry will end humanity