Will Eliezer Yudkowsky go on the DarkHorse Podcast by 2025?
Dec 31

Eliezer Yudkowsky described a scenario involving an AI-created brain-altering virus.

Bret Weinstein expressed scepticism, and there was back-and-forth on Twitter.
Bret expressed willingness to discuss the subject:

and later reiterated it (9:45 & 10:55) during his podcast.
Will Eliezer go on the DarkHorse Podcast by 2025 to discuss the contentious topic?


I bought YES, but part of my brain is telling me Eliezer is done with podcasts for a while.

I think he aspires to stop doing things that don't seem helpful or that have sharply diminishing returns. Has he already said what he wants to say? Does he think going on DarkHorse would have any substantial positive impact?

Personally, I think it might have a positive impact, so I'm not willing to bet <20% that Eliezer comes to the same conclusion at some point this year and just sits down for an hour or two to talk to Bret and Heather.

Specifically, Bret and Heather are evolutionary biologists - and if Eliezer's framing of AI/mind-design/alignment being analogous in some ways to evolution as an optimization process is going to work on someone, it would be them.

How willing would they be to change their minds on AI Doom...? Well, maybe if they change their minds re Lab Leak, after the Peter/RootClaim/Scott stuff - then Eliezer might conclude "ah, so these people are somewhat sane after all."

I also get a sense that the disagreement over mind-control viruses felt like a specific kind of anomaly to Eliezer, one which he might not want to touch. What do you do when someone has said they've concluded that something is actually impossible? If you take them at their word, no amount of argument/evidence is going to dissuade them of that. Perhaps it was just about definitions? Bret thinks: "I know what a virus is, I know that viruses can be made to target specific cell types, and I do not see how you can make a human anomalously persuadable by introducing a genetic payload to a whole brain, and have that actually be transmissible, with all of the detailed code remaining sufficiently unmodified as the virus spreads through the population."

And Eliezer would reply something like: "Okay, what about specific scenario X, where you could have genetic material in a chromosome-like structure with fewer copying errors? What rules out an engineered virus simply bypassing the degradation of its non-adaptive functionality via natural selection during spread?"

And Bret goes: "I don't think you can engineer anything that spreads better than a virus actively adapting to do that, especially if you need the engineered virus-thing to carry a huge payload to the brain."

And Eliezer replies: "Okay, if we were in the same room, I would now ask you to promise not to tell anyone even a hint of the content of a sheet of paper I would write out and pass to you, along with a lighter to burn said sheet after reading. But we aren't in the same room, and I don't trust the 'series of tubes' connecting us. If I say I have a reply that takes that form, are you willing to sit with that frame of mind and consider that there might be specific things I think you can actually do with AI which would falsify some of your categorical denials? I cannot safely share the evidence I have, but I would ask you to try to at least regenerate the frame of mind which led me to consider those ideas possible, and discover them for yourself."


Y'know, this scenario wouldn't actually happen the way I describe, but my internal version of Eliezer and my internal version of Bret do seem to do a lot of barely productive but quite heated arguing.

I am pretty sure this question rests on what Eliezer thinks of the idea - I don't think Bret would turn him down for a conversation.
