Will the Time article and the open letter lead to the marginalization of AI safety concerns within AI capabilities?
Resolved NO on Nov 13

Resolves based on general vibes and agreement. It doesn't have to lead to universal marginalization, only marginalization within AI capabilities groups.

For instance, some AI capabilities groups have currently issued statements saying that maybe there is some existential risk and we vaguely need to be careful. If they retract those statements and argue that alignment people are crazy or incompetent, this question resolves YES.

There have already been some capabilities people arguing that alignment people are crazy or incompetent, so in order to resolve YES they would have to start doing so to a greater degree, without also starting to defend them to a greater degree.


I won't be betting in this market for objectivity reasons, but I suspect it will resolve YES, for the following reasons:

My reasoning centers on what I call "high-energy memes". I assume that people here are familiar with the concept of a meme: an idea that can be shared from person to person and spread throughout society. By "high-energy", I mean a meme that in some sense demands a lot of action, or shifts the political landscape a lot, or similar. For instance, one high-energy meme is "AGI will most likely destroy civilization soon"; taken seriously, it demands strong interventions on AGI development, and if such interventions are not taken, it recommends strong differences in life choices (e.g. less long-term planning, more enjoying the little time we have left).

One can create lots of high-energy memes, and most conceivable high-energy memes are false and harmful. (E.g. "if you masturbate then you will burn in hell unless you repent and act strongly to support our religion".) Furthermore, even if a high-energy meme originates from a source that is accurate and honest, it may be transformed in the process of being shared, and the original source may not be available, which can make it less constructive in practice.

Since high-energy memes tend to be bad, lots of social circles have created protections to suppress them. But these protections also suppress important high-energy memes such as AGI risk. They also tend to be irrational and exploitable, and they can shield the people in power from being held accountable.

(This model was originally written for a different context than AI safety, but it is partly inspired by AI safety.)


@tailcalled Based on the behavior of AI labs so far, I think this would resolve NO so far; do you agree? I.e., are we now betting on whether there will be visibly increased marginalization in the rest of the year?

@MartinRandall I agree; at least my current vibe is that it (or GPT-4) has done the opposite and led to people taking AI safety concerns much more seriously. I expect this market to resolve NO, and there are really only two main ways I could see it resolving YES:

  1. We could imagine some sort of major scandal or movement happening based on the letters. For instance, maybe some very famous person like Joe Biden makes a big deal out of Eliezer Yudkowsky's letter being dangerous, or maybe some very famous person who is widely disrespected in AI firms, like Donald Trump, makes a big deal out of the letter being the truth. But even then I don't know if this is enough to counteract the popularization effect. Or maybe the letter triggers counterproductive regulation that makes AI safety concerns unpopular (though attribution would be hard here, and if we can't convincingly attribute it to the letter/article then the market will still resolve NO).

  2. Maybe my current vibe turns out to be mistaken, and some leading AI safety people will come in and explain how the article/letter were actually a mistake.

predicted NO

@tailcalled

/EsbenKran/will-there-be-an-antiai-terrorist-i-fa3a2721ed32

So I guess if that resolves YES and the terrorist claims inspiration from Yudkowsky, then that's a path to YES.

@MartinRandall Maybe. I think it might be inappropriate to count the terrorist incident unless discourse afterwards specifically centers on Yudkowsky's Time article?

predicted NO

@tailcalled That's fair. Yudkowsky, even without the letter, has been read as endorsing extreme actions in the face of cosmic loss of value; there would need to be some claim that the Time article was the tipping point.

Yudkowsky did nothing wrong

It hasn’t even been a day and the memetic immune system firing up a huge antibody response to Yudkowsky’s Time article is hard to watch. The war over the new Overton boundary will be a knife fight.

Made some related markets people might be interested in:

predicted YES

@CalebWithers 2030 is too far out, try 2025

predicted YES

@L Agree that could be good, but I'm also interested in trying to look ahead to what we'll think once the dust has settled and the implications have become more clear.

I'm always right about everything; check my trading history.

There are already people saying that Eliezer advocates violence, which is a terrible oversimplification of his article (every government policy is backed by government violence; unless you are an AnCap, you "advocate for violence" in this sense, and even AnCaps tend to advocate for their own equivalent violent institutions).

This tendency to misrepresent others' views is one of the "protections to suppress high-energy memes" I am talking about. The market doesn't automatically resolve YES based on this, though; it would require that e.g. OpenAI buys into it by declaring that alignment people are terrorists or something.

predicted YES

"oh, don't worry, it's state violence, only an ancap would dislike that" said the classical liberal to the crowd, confident that no one would ever consider creating prosocial networks that do not depend on centralized mechanisms -

you ... do know about those who would discard not just the crowns of kings but also the crown of currency, yes?

like, my sense is that without ai augmented mutualism safety cannot be solved even in principle; of course states were going to make a mess of things, but yudkowsky advocating for it is not going to make life easier

predicted YES

certainly defensive systems are needed to end violence forever, but we should end violence forever

predicted YES

@L defensive systems eg big ass shields for all

@L AFAIK offense is more powerful than defense
