Resolution criteria:
This market resolves YES if citations of at least three different individuals calling for bans on any open source or open weight LLMs (such as Llama-2) are posted in the comments, and all of the individuals in question are members of the "doomer" Rationalist sub-subculture (see definition below). This market resolves NO otherwise.
A statement calling for all models to be banned doesn't count. The cited statements must specifically call for a ban on one or more open source models, or for a ban on open source models in general.
Definition of "doomer" and notability criteria:
For comments that seem promising, I will create a poll myself to judge whether the individual in question is a "doomer" or not. The options of the poll will be (1) "Yes, a doomer", (2) "No, not a doomer", and (3) "I've never heard of this person". Option (2) cannot score higher than option (1), and, for the sake of notability, the sum of (1) and (2) must be at least as great as (3).
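To make these thresholds concrete, here is a minimal sketch of the check applied to raw vote counts (the function name and signature are illustrative only, not part of the market mechanics):

def passes_doomer_poll(yes: int, no: int, unknown: int) -> bool:
    """Illustrative check of a poll result against the criteria above.

    yes     -- votes for option (1), "Yes, a doomer"
    no      -- votes for option (2), "No, not a doomer"
    unknown -- votes for option (3), "I've never heard of this person"
    """
    is_doomer = no <= yes               # option (2) cannot score higher than option (1)
    is_notable = (yes + no) >= unknown  # sum of (1) and (2) at least as great as (3)
    return is_doomer and is_notable

For example, passes_doomer_poll(10, 4, 12) would pass, while passes_doomer_poll(3, 2, 8) would fail on notability.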
Polling examples:
/singer/is-jeffrey-ladish-a-doomer
/singer/is-zvi-mowshowitz-a-doomer
/singer/is-eliezer-yudkowsky-a-doomer (calibration)
So far the polling seems to indicate that Zvi Mowshowitz is a positive example. This would put the doomer quota at 1/3 so far.
Jeffrey Ladish: "We can't go back to a pre-LLaMA world. But we can prevent the release of a LLaMA 2! We need government action on this asap"
https://twitter.com/JeffLadish/status/1654319741501333504
Note: he's since refined his views; he now thinks "we should require rigorous capability and mitigation evaluations, and allow people to release [weights] on the basis of the resulting risk assessments."
@derikk This might count. I'll use it to calibrate my "is notable"/"is a doomer" test. /singer/is-jeffrey-ladish-a-doomer
@singer Jeffrey is definitely a doomer who's more on the notable side (he's executive director of Palisade Research and one of the bigger players in the policy sphere). I think the fact that your poll had more "I've never heard of this person" votes is more reflective of the ignorance of Manifold randos and of this being a niche topic than a sign that he's too obscure to be a verifiable doomer.
@MaxHarms I agree that my bar for notability was too high. Instead of changing the resolution criteria, I'll probably go with your suggestion of adding the word "famous" to the title. IMO that's more fair to people who made early bets. If you disagree I'd appreciate hearing your reasoning.
https://twitter.com/TheZvi/status/1734924429321187629
I believe this constitutes Zvi specifically calling out open source models as fundamentally unsafe and needing to be banned.
@MaxHarms Is this the part you're thinking of?
I believe that such a regime will automatically be a de facto ban on sufficiently capable open source frontier models. Alignment of such models is impossible; it can easily be undone. The only way to prevent misuse of an open source model is for the model to lack the necessary underlying capabilities.
@singer Yes, that's right. There (and in his other writing) he has said that open source models are impossible to align and that he supports regulation to prevent misuse and thus effectively wants a ban on open source models.
@MaxHarms What do you think about the implicit notability clause (last part)?
The options of the poll will be (1) "Yes, a doomer", (2) "No, not a doomer", and (3) "I've never heard of this person". Option (2) cannot score higher than option (1), and the sum of (1) and (2) must be at least as great as (3)
I anticipated that it would be uninteresting and too easy to hit the quota of 3 individuals if any person was admitted. EDIT: this isn't intended as a jab; I really want your opinion on whether this is a good measure.
@singer I think it makes the resolution really dependent on the population you end up polling. For instance, I'm pretty well known within the rationalist sphere, but am hardly famous broadly. I don't know how many people on Manifold know me. I think if you keep it as such you should rename the market "≥3 famous doomers want bans on open source LLMs (in particular) will be posted in the comments"
@MaxHarms I think maybe a better resolution criterion would be "must have an established record of having been a doomer for at least 3 years". Or something to that effect.
@singer I wonder how many doomers exist such that more people polled on Manifold would say they know them than don't. My guess is about a half dozen? (Yudkowsky, Scott Alexander, Zvi, Nate Soares?, Stuart Russell?, Max Tegmark?) Being as deep in the doomerspace as I am, it's hard for me to make sense of who counts. Like, Dario Amodei doesn't, right? Paul Christiano??
"must have established record of having been a doomer for at least 3 years"
Do you think this could be defined with a poll of some sort? I'm trying to think of how to operationalize this.
@singer I mean, at the end of the day it's a judgment call, whether it's you or a crowd deciding. I'd say that the person should have a public record, at least 3 years old, indicating that they're very worried AI is going to kill everyone and/or that they're working specifically on reducing AI x-risk. If it's ambiguous I guess you could poll people...
@MaxHarms Thanks for your suggestions. I might make a "Who are the top 10 doomers" poll in the style of this one: /singer/who-are-the-top-10-ai-researchers-a and use that as the candidate list.
@singer As in, only people in the top 10 count, and only if they specifically want to ban open source?
@MaxHarms Maybe. I'd have a better idea after seeing the poll results. I'm not very knowledgeable about the doomer community and want to figure out their stances (that was the reason for creating this market).
Here's my understanding of the stance of the community of people who take existential risk from AI as a serious thing that has a non-trivial chance of happening:
1. Current models aren't existentially risky in that way, or else we'd all likely be dead by now.
2. But the next generation of models might be; the size and compute involved in training frontier models is going up quickly.
3. And open sourcing a model that might have the capability to kill us all will significantly increase the chance we all die, because any safeguards we can put in are easy to take out if you have the model weights and can run a copy on your own hardware.
Conclusion: Open sourcing current models is probably safe-ish (except that it may increase the rate of AI development, when probably it would be better if that slowed down while we work on getting better at safety measures), but open sourcing any future models is quite risky and should only be done after a level of safety testing we don't yet know how to do. More research needed.
@MaxHarms There has to be a citation of them calling specifically for open source models to be banned. For the sake of this question, calling for all models to be banned doesn't count. Thanks for bringing this up, I'll clarify the criteria.
@singer Right, so even if Yudkowsky thinks open source models are a terrible idea, if he hasn't called for them and only them to be banned it doesn't count, even if he's on record as thinking they're a terrible idea, and worse than closed models? (Just checking.)
@MaxHarms Open source models have to be singled out in the statement for the ban. The individual might have made other statements to the effect that all models should be banned, but the cited statement needs to be about open source models specifically.
@singer @MaxHarms What about this statement from Yudkowsky on the Lex Fridman podcast in 2023?
"But open sourcing it? No, that's just sheer catastrophe. The whole notion of open sourcing this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal, and building stuff you don't understand that is difficult to control that where, if you could align it, it would take time? You'd have to spend a bunch of time doing it? That is not a place for open source, because then you just have powerful things that go straight out the gate without anybody having had the time to have them not kill everyone."
@Clark I'll think about this and listen to what others have to say, but my first inclination is that this is "merely" condemning open source models. There's no explicit call for regulatory action or government intervention, even if one is strongly suggested.
@singer I would agree with that. It seems like it should be easy to find from EY! May search for more later.