Context.
1. https://manifold.markets/Writer/will-elon-musk-do-something-as-a-re?r=V3JpdGVy
2. https://twitter.com/elonmusk/status/1629901954234105857
Feb 26, 8:56pm: If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a dignified or neutral initiative (as opposed to negative/undignified)? → If Elon Musk does something as a result of his AI angst by 2025, will Eliezer Yudkowsky judge it to be a positive or neutral initiative (as opposed to negative)?
Edit: an "initiative" should be something relatively momentous, such as founding an organization or financing one with over $10M. Tweets, signatures, etc. don't count.
@MartinRandall
If Yudkowsky doesn't judge by the close date, it resolves N/A.
It looks like what he needs to judge is x.ai.
If Elon Musk ends up doing something else of the same magnitude and Eliezer judges it as having the opposite sign, the question resolves N/A, but this seems very unlikely to come about.
Eliezer retweeted this recently:
https://twitter.com/AISafetyMemes/status/1647101515025498112?cxt=HHwWgIDSgb2G19stAAAA
@kinrany A signature there is barely at the level of a tweet. It doesn't come close to counting as an "initiative".
@kinrany Also last I heard there was woefully insufficient verification on that form and they had to remove at least one big name who had not in fact signed it, so given Musk's recent interest in starting a new AI org, I'm currently below 50% that Musk actually signed this letter.
@BenjaminCosman Musk seems to be the main beneficiary, though; people already speculate that he did this to actually help his new company catch up.
@b575 Folks have also pointed out that the whole point of treaties about collective action problems is that any unilateral concession is not to your advantage, so it's not necessarily contradictory to do something even as one tries to get the collective (including yourself) to agree to stop doing the thing. I believe I was wrong to assign such a low probability here (and I'm currently at 98%+ YES instead, since they now claim they've independently verified the remaining big names like Musk).
From this Reuters article, Elon Musk said: "I'm a little worried about the AI stuff [...] We need some kind of, like, regulatory authority or something overseeing AI development [...] make sure it's operating in the public interest. It's quite dangerous technology. I fear I may have done some things to accelerate it."
Elon Musk does not appear to understand AI alignment, although he does understand that AI is very dangerous, which puts him ahead of very many people in the world. So this could go either way. I guess it depends on whether he hires someone who is prepared to educate him, and doesn't fire them for stridently disagreeing with him.
@RobinGreen Most things I expect him to do would be net negative. The only advantage he has is a shit ton of money, but any apparatus he tries to set up here will surely be Goodharted to oblivion. And because improving capabilities is easier than advancing alignment, and because he'll likely at best be able to distinguish project speed related to AI from speed unrelated to it, he will likely inadvertently end up funding capabilities work even if he's more concerned about existential risk than about woke AI.
@AlexAmadori Starting yet another AGI lab would likely be ranked as Very Bad by Eliezer's standards, and if Elon's main concern about OpenAI is that it's too woke, then certainly he hasn't learned anything new.
@GarrettBaker I may be missing something because I didn't want to sign up for the newsletter. I agree he doesn't seem to have learned anything new.
@tom I saw this, but thanks anyway. The tweet, however, made me update slightly in the YES direction.