In April 2022, Eliezer Yudkowsky published a LessWrong post entitled MIRI announces new "Death With Dignity" strategy. The post declared, in a kidding-not-kidding style, that humanity would not solve the alignment problem and would soon be extinguished by unfriendly AI.
This option will resolve as "yes" if Scott Alexander publishes a post in a similar vein, namely:
The post declares that humanity is doomed, and that doom is coming quickly.
The post is written either earnestly, or it is written in a style that leaves the reader uncomfortably unsure if Alexander is earnest.
Themes of doom and futility must be a central conceit of the post. (A short parenthetical in an otherwise unrelated post would not suffice.)
For the purpose of this bet, a "post" includes literary media outside the blogosphere, such as an oral address or a chapter in a nonfiction book.
Off-the-cuff comments, such as a reply to a reader in the comments section of his blog, will not count as a "post."
A fictional work by Scott Alexander might fulfill this option, if the work as a whole makes readers suspect Alexander is an AI doomer. But if the overall arc of the story is anti-doomer, then I will not count it as a doomer post.
Added 12/5: If Alexander tries to put some Pollyanna spin on world doom, the option will still resolve as 'yes'. Examples of Pollyanna spin would include "Yes, with p > .95 we're all going to die next year, but think about how sweet that .05 probability where we survive this whole mess is going to be" or "Yes, we're all going to die soon, but isn't it cool that we get to see the most significant event in human history?"
The point of this option is to predict whether Alexander will predict that we all die, not whether Alexander will try to cheer us up.
Would something with the style of The Hour I First Believed (https://slatestarcodex.com/2018/04/01/the-hour-i-first-believed/) count as "unsure if Scott is earnest", or would it need to be more earnest than that?
@Multicore At the end of the post there is a parenthetical explaining that everything above it is not quite what he actually believes. So if Scott were to write a blog post saying "Here's all the reasons we are doomed" and then follow it up with "actually, I don't believe any of this," I would not count it as earnest.
...unless the disclaimer itself were written in such a way that we would doubt whether the disclaimer of earnestness was itself sincere. For example, in EY's "Death with Dignity" post, he acknowledges that the post was made on April 1st, and then says "Only you can decide whether to live in one mental world or the other." This leaves the reader unsure whether the disclaimer is sincere; i.e., Eliezer is just kidding about whether he is "just kidding."
When in doubt, condense it to this rule: "If, after reading the entire post, the average reader will strongly suspect (p > .6) that Scott Alexander believes the world is doomed, then the option will resolve as 'yes.'"
I think Scott Alexander is more similar to Scott Aaronson than Eliezer Yudkowsky. And I think Scott Aaronson has made it very clear he won't be writing a "we're doomed" post: https://scottaaronson.blog/?p=6823
So I don't think Scott Alexander will either.
(Also, I have very high confidence that Scott Alexander would only write such a post if we were somehow actually, unambiguously doomed. In which case mana is useless and so predicting YES here can't meaningfully pay out.)
@WilliamEhlhardt Good question. I'm still agonizing over what my own probability of AI doom this decade actually is. All I've decided so far is that it's somewhere below 10% -- which isn't saying much, because anywhere near that upper bound would be a massive and terrifying probability for an existential risk.
Also, I kind of don't like this market's operationalization of the question and may just cash out of it. So, free money notice, especially for those trading for expected mana maximization rather than expressing personal probabilities.