If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a positive or neutral initiative (as opposed to negative)?
Ṁ62k · closes 2025 · 17% chance

Context.

1. https://manifold.markets/Writer/will-elon-musk-do-something-as-a-re?r=V3JpdGVy

2. https://twitter.com/elonmusk/status/1629901954234105857

Feb 26, 8:56pm: If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a dignified or neutral initiative (as opposed to negative/undignified)? → If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a positive or neutral initiative (as opposed to negative)?

Edit: an "initiative" should be something relatively momentous, such as founding an organization or financing one with over $10M. Tweets, signatures, etc. don't count.


How does this resolve if Elon does more than one thing?

How does this resolve if Yudkowsky doesn't judge it?

predicts NO

@MartinRandall

If Yudkowsky doesn't judge by the close date, it resolves N/A.

It looks like what he needs to judge is x.ai.

If Elon Musk ends up doing something else of the same magnitude and Eliezer judges it as having the opposite sign, the question resolves N/A, but this seems very unlikely to come about.

predicts NO

@Writer Will you resolve before the close date if Eliezer judges x.ai as negative and Musk appears unlikely to have a second AI angst project of similar magnitude?

predicts NO

@adele Yes, if Eliezer gives a judgement, I'll resolve the market.

Elon signed a petition to pause AI research and dedicate efforts to safety. All that's left is EY saying that this isn't a negative initiative.

predicts NO

@kinrany A signature there is barely at the level of a tweet. It doesn't come close to counting as an "initiative".

predicts NO

@kinrany Also last I heard there was woefully insufficient verification on that form and they had to remove at least one big name who had not in fact signed it, so given Musk's recent interest in starting a new AI org, I'm currently below 50% that Musk actually signed this letter.

predicts NO

@BenjaminCosman Musk seems to be the main beneficiary, though; people are already speculating that he did this precisely to let his new company catch up.

predicts NO

@b575 Folks have also pointed out that the whole point of treaties about collective action problems is that any unilateral concession is not to your advantage, so it's not necessarily contradictory to do something even as one tries to get the collective (including yourself) to agree to stop doing the thing. I believe I was wrong to assign such a low probability here (I'm currently at 98%+ yes instead, since they now claim to have independently verified the remaining big names like Musk).

predicts NO

@Gurkenglas Why such confidence in Elon?

predicts YES

@GarrettBaker Resolution criteria unclear, plus a good chance of nothingburger.

predicts NO

From this Reuters article. Elon Musk: "I'm a little worried about the AI stuff [...] We need some kind of, like, regulatory authority or something overseeing AI development [...] make sure it's operating in the public interest. It's quite dangerous technology. I fear I may have done some things to accelerate it."

Elon Musk does not appear to understand AI alignment, although he does understand that AI is very dangerous, which puts him ahead of a great many people in the world. So this could go either way. I guess it depends on whether he hires someone who is prepared to educate him, and doesn't fire them for disagreeing with him stridently.

predicts NO

@RobinGreen Most things I expect him to do would be net negative. The only advantage he has is a shit ton of money, but any apparatus he tries to set up here will surely be goodhearted to oblivion, and because improving capabilities is easier than advancing alignment, and he’ll likely at least be able to distinguish between project speed related to vs unrelated to AI, he will likely inadvertently end up funding capabilities work even if he’s more concerned about existential risk than woke AI.

predicts NO

@GarrettBaker speed -> s

predicts NO

@GarrettBaker I think you meant goodharted. Goodhearted means "well-meaning".

Suppose it turns out that, for convoluted reasons nobody saw coming, the initiative technically leaves us better off than if he hadn't done it. Does this resolve by Eliezer's judgement upon hearing Elon's decision, or by which of any number of variants?

@GarrettBaker very unlikely that EY would like that

@harfe sounds pretty orthogonal to his concerns imo

@AlexAmadori Starting yet another AGI lab would likely be ranked as Very Bad by Eliezer's standards, and if Elon's main concern about OpenAI is that it's too woke, then he certainly hasn't learned anything new.

predicts YES

@GarrettBaker I may be missing something because I didn't want to sign up for the newsletter. I agree he doesn't seem to have learned anything new.

predicts NO

@tom I saw this, but thanks anyway. The tweet, however, made me update slightly in the YES direction.
