86%
Eradicate all mosquito-borne diseases globally, immediately (NO) -OR- Design and implement a (well-received) AI safety plan to dramatically lower x-risk (YES)
36%
Constantly feel like you have to sneeze (NO) -OR- Have hiccups for the rest of your life (YES)

Each answer contains a dilemma: Would you rather pick the first option (NO) or the second option (YES)? Bet NO, or YES, according to your opinion. 1 person = 1 vote (per answer), so having more shares does not make your vote count for more.

Every week, the market will close.

If an answer has a clear majority of YES holders, that answer will resolve YES.

If an answer has a clear majority of NO holders, that answer will resolve NO.

If it's very close, and votes are still coming in, that answer will remain unresolved.

The market will then re-open for new submissions, with a new close date the next week. This continues as long as the market is worth running.


It does not matter what % the market is at, and bots holding positions are also counted.
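For concreteness, here's a minimal sketch of how the weekly resolution rule above could be expressed in code. The one-person-one-vote counting is from the description; the exact "clear majority" margin (a lead of at least 5 holders) is an assumption borrowed from the creator's comments below, not an official criterion.

```python
# Rough, unofficial sketch of the weekly resolution rule described above.
# Assumption: a "clear majority" means a lead of at least 5 unique holders
# (the margin mentioned in the comment thread), counted one vote per person.
CLEAR_MAJORITY_MARGIN = 5

def resolve_answer(holders: dict[str, str]) -> str | None:
    """holders maps a user id to the side they hold ('YES' or 'NO').

    Returns 'YES' or 'NO' if that side has a clear majority of holders,
    or None to leave the answer unresolved until the next weekly close.
    """
    yes_votes = sum(1 for side in holders.values() if side == "YES")
    no_votes = sum(1 for side in holders.values() if side == "NO")
    if yes_votes - no_votes >= CLEAR_MAJORITY_MARGIN:
        return "YES"
    if no_votes - yes_votes >= CLEAR_MAJORITY_MARGIN:
        return "NO"
    return None  # too close to call; stays open

# e.g. resolve_answer({"a": "YES", "b": "YES", "c": "NO"}) -> None (too close)
```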

Some guidelines:

  • I encourage you not to bet options to extremes (1% or 99%) before a quite clear majority has been established. Otherwise, it prevents others from betting toward that extreme, and can bias the results.

I may update these exact criteria to better match the spirit of the question if anyone has good suggestions, so please leave a comment (and ping me) if you do.



25v19 is, uh, not a statistically significant majority: under the null hypothesis of a coin toss there's a ~44% chance of getting a split around that extreme, according to ChatGPT. I'm realizing a lot of these "clear majority" votes aren't that clear after all... uh, presumably that's assumed to be the case and fine to people. I was wondering if it was a clear majority, thought to run this test for fun, and now I'm realizing maybe I should ignore the test and make up my mind some other way, but I guess I'll keep it open and next week it resolves if there's a diff of 5+. Unless someone convinces me before then to do otherwise. Also, there are ongoing conversations about it, which are fun.
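(For anyone who wants to check that figure without ChatGPT, here's a minimal sketch of the exact two-sided binomial test for a 25v19 split, standard library only; it lands around 0.45, so the split really is well within coin-flip noise.)

```python
# Exact two-sided binomial test: under a fair-coin null, how often is the
# split at least as lopsided as 25 vs 19?
from math import comb

yes, no = 25, 19
n = yes + no
k = max(yes, no)

# P(leading side gets >= k of n votes), doubled for the two-sided test
p_two_sided = 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"{p_two_sided:.3f}")  # ~0.45, well within coin-flip noise
```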

Also, y'all gotta decide for yourselves on hiccups and sneezes

Eradicate all mosquito-borne diseases globally, immediately (NO) -OR- Design and implement a (well-received) AI safety plan to dramatically lower x-risk (YES)

I'm still curious to hear from YES voters here - you can eradicate ALL mosquito-borne diseases globally, immediately, or take a chance on being a person whose plan for AI safety/risk reduction possibly works over time.

Any answers on why you'd choose the coinflip over the guaranteed elimination?

the +EV is what matters, more than the lack of risk

Which is to say that the guaranteed benefit has a slight risklessness premium, but overall the reason for YES is that it has the chance to help way more people / sentient folks, under my model of the future.

thanks for the answer

To elaborate on @Bayesian's point, the scale of the difference in effect is more obviously vast if you look at the actual numbers. Helping reduce the estimated ~15% risk of ASI accidentally destroying civilisation or triggering indefinite dystopia this century would potentially affect >10^38 (a hundred undecillion, 100,000,000,000,000,000,000,000,000,000,000,000,000) beings over a period of >10^10 (ten billion, 10,000,000,000) years.

https://80000hours.org/problem-profiles/artificial-intelligence/

https://reducing-suffering.org/altruists-focus-reducing-short-term-far-future-suffering  

I still really disagree with the choice but thanks for elaborating

@shankypanky I'm a NO buyer even if guaranteed elimination

🧡

where do you stand on AI risk?

I think that AI risk is a risk in the same way that nuclear weapons or a synthetic pathogen is a risk. I agree that it's something that should be focused on more and that the mean outcomes could potentially be quite bad, but I think that it's cringe + people are overplaying their hand when they insist that the median outcomes are also horrible + "we probably won't live to 2100" or whatever. If you wanted me to give an EV for the AI death toll in the next 100 years, I would probably say on the order of 10-50M, heavily skewed towards the tails.

Additionally, if ASI is possible as claimed, which I am not totally sure of, I heavily distrust the assumption that it could be in-principle controlled given enough time. In principle, we can't construct a way to compute the Busy Beaver function for all n. There are also some pretty weird ethical implications to effectively enslaving a presumably sentient ASI.

What about guaranteed torturous s-risk dystopia though? That could also result from failed superalignment

Would you also feel differently if the option was for successful AI governance to prevent ASI from being built in the first place?

I just fundamentally disagree with the framework of this kind of rationalist longtermist argument. It feels like we're doing -- I really wish there was some name for it -- some weird EV manipulation here. Like let's say that I ask you to give me 100 bucks and that I really need it, and I super promise that if you give me the 100, I will grant you eternal life, Rayo(123456) dollars in return, your personal neutron star, and a pet ASI that will not betray you or do anything sketchy.

So obviously I'm bullshitting, but even if there's a 10^-9999999999999999% chance that I'm not... all I'm saying is give me the 100?

With this type of reasoning, I fail to see why most rationalists aren't religious, because hell is like, pretty much the OG s-risk. Eternal torment beyond your wildest imagination and all that

@TheAllMemeingEye RE: preventing ASI / eliminating AI risk, that option is preferable to me. That said, I still don't value it more than eliminating all mosquito-borne diseases.

Haha fair enough, the Pascal's Wager/Mugging counterargument is a pretty common one, and to an extent I agree with it, since my own donations are split roughly equally between near-term human welfare, near-term animal welfare, long-term risks, and meta-altruism

I do however think the very much not infinitesimal probability of the risks, as discussed in the 80k link I posted, does at least slightly set this above supernatural religion

I think there was a really nice GiveWell blog post discussing this, I'll try and find it

I think evidential decision theory is wrong and as such I would not give in to this baseless threat! Also, this breaks depending on the positive scale of your utility function (maybe you think goodness only goes so far). But if my utility function were linear, and I knew the probability was tiny like that but the EV was massive, I would of course make this trade. I think someone who wouldn't, under these assumptions, has a strange aversion to making highly +EV trades. Alas, irl I would refuse that threat, and not really consider the plausibility of it worth any brain-cycles because utility isn't linear, but I sure would value a lot reducing the risk of losing out on value of a cosmic scale, if I'm not set up against an agentic opponent that gave me this problem to fix knowing what I would do, or something.
(this was not most persuasively said)

Sure, if you find it I'll read it. I also have some philosophical disagreements w/ longtermism in specific and most strains of utilitarianism in general.

I think it would be fair to say that longtermism, if presented with a choice between a 100% chance of a thriving planetary utopia and a 99.9% chance of a 1984 world / 0.1% chance of a thriving intergalactic utopia, would have to choose the latter, because both a planetary 1984 world and a planetary utopia, and indeed the differences between them, would seem like nothing when zoomed out against an intergalactic utopia. Note that I'm not saying that you or any one specific longtermist would necessarily prefer this choice, just that that's where I think the logical end of that type of ethical system is.
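To spell out the arithmetic behind that "logical end" claim, here's a toy expected-utility comparison; every number in it is a placeholder picked for illustration, not anything asserted in the thread:

```python
# Toy numbers only: once the "intergalactic utopia" is worth more than ~1000x
# the planetary one, naive expected utility prefers the 0.1% gamble.
u_planetary_utopia = 1.0            # baseline unit of value
u_planetary_1984 = 0.0              # treated as roughly worthless, for simplicity
u_intergalactic_utopia = 10_000.0   # anything > 1000x the planetary value

ev_safe_choice = 1.00 * u_planetary_utopia
ev_gamble = 0.999 * u_planetary_1984 + 0.001 * u_intergalactic_utopia

print(ev_safe_choice, ev_gamble)  # 1.0 vs 10.0 -> the naive EV math picks the gamble
```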

I find it easy to cross-compare one person with themself in an alternate situation, hard to add utility when it comes to multiple people feeling a certain way, and impossible to compare a qualia-filled experience to the void. I think we can cross-compare Actual Future A and Actual Future B. I don't think you can cross compare Actual Future A with The Void. Or in other words, if you know anything about combinatorial game theory https://en.wikipedia.org/wiki/Star_(game_theory), I think that nonexistence would have to have a value which is confused with all possible real values + if there's a utility function, perhaps the utility of a certain set could be fuzzy over a certain area.

I also find it hard to reconcile the idea that longtermists are trying to do moral math over, what was it, 10^38 beings across 10^10 years, with the idea of any real utility function. Put simply, actions have effects, and if you trace those effects over time, as time approaches infinity you would expect them to eventually fizzle out to 1:1. That, or you'd have to show that the effect actually approaches +/- infinity in value as time approaches infinity. Both of which are kinda weird.

Whether your utility function is linear or not, if there's a 10^-9999999999999999% chance that I'm not lying, that's still probably a massive +EV trade. Eternal life is, of course, eternal, and Rayo(n) probably grows a lot quicker than your EV function shrinks

I would probably have to write a lot more 9s there, but just take -9999999999999999 as "I would have written a lot more 9s"

seems hard to estimate these probabilities, and I don't want to be exploited by an agent that offers me seemingly +ev deals to make money, so I refuse

@Bayesian I agree, which is why I usually reject longtermist arguments w.r.t. AI.

The difference is that AI risk is not another agent exploiting our niceness to redirect our resources toward AI when they could have been used for something else. So the decision-theoretic implications do not apply.

It seems hard to estimate the probabilities of ASI hyper-trillion-year risk from where I stand, + I don't want to be exploited by an agent that offers me seemingly +EV deals (dropping everything to hyper-focus on AI issues) regardless of whether they're doing it for money or out of genuine belief, so I refuse.

I mean yeah, if you think people are trying to convince you to get your money or support, don't be convinced by them. Make up your own mind, and know that it's a potentially important epistemic question despite clearly being pretty hard to estimate. If someone tells you you should donate to their lab bc they'll reduce your risk of infdeath, that's not a trade you should make-bc-they-offered-it. But if you would have done it by choice had they not offered it, "not responding to threats" doesn't mean "act meaner to the person than you would have treated them had they not made a threat", so doing it may still be fine, unless you think they are omega-like.. if that makes sense?

Basically, fair, your reason is good for not responding to someone's argument by sending them 100$. It's still not decision-theoretically bad to give them 100$ if you would have done it had they not made the threat. And considering the non-near-zero credence you ascribe to it without anyone making-the-case-to-you-as-a-threat, it's worth some occasional brain-cycles to consider your well-considered subjective probabilities about it, and the EV of different actions, in a non-exploitable way.

Well, I think if we're adding the axiom "if I would have only made decision X with statement A, ignore EV unless...", we're suddenly factoring in something that isn't purely estimated EV. Which was not my point with this specific conversation, so I won't pretend like it was. However, it's something that I broadly agree with.

On the "exploited" part, I just chose that word because you used it. The more proper kind of phrasing for my actual stance would be something more like "I don't think we should get deathtrapped by seemingly +EV deals that are very slanted towards the tails." Which I don't think is unreasonable to see happening when some people are making arguments like "climate change isn't a big deal because ASI is obviously the more pressing, immediate issue. Either we figure it out and climate change is solved by our AI friend, or we don't and ASI kills us in a way more painful way anyways"

In the threat case, we must ignore EV, or else we can get exploited by agents that know this about us. This is a hugely important distinction between your analogy and the situation with AI x-risk. I think that in Pascal's Mugging, you should not give in to a threat. But in modified-Pascal's-mugging-where-nobody-is-threatening-you, you would just not trust your reasoning's validity when the probabilities involved are this tiny. That's a common response to Pascal's Mugging: sounds fishy, blindly multiplying tiny probabilities with hugely positive tail-outcomes, I don't trust this kind of suspicious +EV scenario, so I refuse.

This analogy doesn't apply to reasoning about existential risk, I think, because the probabilities aren't 1/1000000000 or something. They might be 5%, they might be 40%, they might be 90%. For most people, a 5% chance of dying from cancer if they continue smoking is reason enough that they ought to stop smoking. I think that generally you don't need to think about the huge upsides, or the minuscule probabilities, because those are suspicious and not so trustworthy lines of reasoning. The medium-sized probabilities, and the individual-human-scale upsides, are stuff we deal with in everyday life. I might cross the street absentmindedly when I don't hear a car, but I would rather take the time to check both ways. Stuff like that.

I think climate change is a big deal. I don't expect it to stay a big deal, because I put significant probability on ASI-like scenarios, but not 100% (it might end up being an actual terminal-kind-of-big-deal), and I don't think slowing down climate change is a waste, or even a bad investment. I don't think it's the best investment, but few things are.

I think that's maybe why we were talking past each other in the last 3 or 4 comments or so. I would simply dispute the premise about existential risk. I think 90% is a massive overshot, 40% is a massive overshot, and 5% is... a pretty big, but I don't know if I would say "massive", overshot. If you were to make me assign a probability to any sort of ASI, it would probably be 2 orders of magnitude below that, and ASI s-risk stuff would go a lot below that.

I still think that AI risk should be taken care of, because even if (from my POV) it would take a lot, which we're just assuming, to get to ASI, that doesn't mean AI can't present a very real near-term threat from epistemic failure, AI getting really good at decryption, catastrophic malfunction, etc.

I'm curious as to your reasoning for thinking climate change is a big deal, because while I agree (and in my opinion it is the issue), I don't see how it would follow under your system. If you believe there's a 40% chance of ASI which can roughly be divided into heaven and hell scenarios, why wouldn't climate change, or any other human-scale event, just fade into the background if you're using a pure EV calculation?

Climate change is a big deal because, in the worlds where ASI ends up not being a big deal for any reason, climate change has the risk of causing a lot of damage to the entire world. Suppose I think there's a 30% chance that ASI doesn't happen this century, because people stop it or because it ends up not being realistically engineerable; then that 30% means highly impactful ways to fight climate change might end up being really, really good deals. They will probably be worse deals than the best deals to reduce AI x-risk, but they're still more worthwhile than, like, almost everything we currently spend our collective resources on as a society.

I misunderstood your ASI credence, so that's my bad. I thought you were in the ~5%-x-risk-this-century range. If you think it's ~0.03%, then yeah, I would not tell you that under your credence you should support ASI x-risk mitigation. I think that is very low, and I don't think the models of ASI that give this low a credence are plausible, so that's something worth talking about maybe, but yeah, under your model of course you would vote NO. Nearer-term risks from AI are also plausibly going to be pretty bad. I think they aren't that much more likely than the more extreme, slightly farther, and much worse risks, so I focus more on the farther-term risks, but many things that help with one of those risks help with the other, and yeah, lots of opportunities there under my model.

I agree with all the reasoning of the first paragraph from my framework, but perhaps you can elucidate what makes it true from yours? Perhaps I'm misunderstanding something fundamentally, but let's take that 30%.

So you believe in 70% ASI. Let's say that 25% (additive) of those scenarios are heaven scenarios, 25% are hell scenarios, and 20% the ASI flies into a black hole or something. I don't know if you would assign those odds but I think most odds work in concept.

So we're left with 25% +1000000 utils, 25% -1000000 utils, 50% 0 utils. If ignoring climate change in favor of putting all our eggs in the AI-risk basket changes that to 26% +1000000, 24% -1000000, 50% -100, shouldn't we, from a pure EV standpoint, still prefer that? I think that, in order to defend that conclusion, one of four things has to be true: AI heaven/hell scenarios aren't as good or bad as we think they are here, climate change suffering is much worse than we think it is and on the order of an s-risk, we have some second non-EV criterion, or ASI odds are lower than postulated here.
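Working through those made-up numbers explicitly (nothing here beyond the hypothetical just stated):

```python
# EV of the two strategies under the toy probabilities and utilities above.
ev_status_quo = 0.25 * 1_000_000 + 0.25 * -1_000_000 + 0.50 * 0        # = 0
ev_all_in_on_ai = 0.26 * 1_000_000 + 0.24 * -1_000_000 + 0.50 * -100   # = 19_950

print(ev_status_quo, ev_all_in_on_ai)  # naive EV prefers ignoring climate change
```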

I'm in the ~5-10% AI non-x still-risk camp. My concerns RE: AI are mostly in the catastrophic-but-not-s-risk bucket. I can't really think of that good of a hypothetical, but think of an AI failure shorting the electricity grid. That kind of thing. Would also be down to chat about ~0.03% or 5% or 70%; this conversation has been pretty fun.

I'll start with rephrasing my claim with fake numbers, because that seems to not have been clear. I think ASI anti-x-risk research is strictly more valuable on average than the average fighting-climate-change research. If I had $1 to spend I would spend it on the AI one. If I had 10 trillion, though, I would spend some on each. Plausibly more on the ASI one, but for all I know that one is easy to fix if you have 1 trillion, and the other one needs 9 trillion or it doesn't work at all. The details are uncertain, but the marginal dollar, under my model, goes farther for AI risk mitigation.

That, however, does not make Climate Change "not a big deal" or not important or not a good funding opportunity. It's a worse funding opportunity than ASI x-risk, on the margin, but it's definitely way better than a lot of other things.

I do not trust myself to correctly reason about tiny changes in underlying probability or weighing tiny changes immensely because of immense stakes. I take the chance when there's no other option, but I wouldn't use the reasoning about 25% to 26%, or stuff like that, if there's anything better, or a decent way to manage risk.

I also think the convo has been fun! What stops ASI from coming within 20 years, under the most plausible scenario(s)? Is the step to AGI what stops you, or the step from AGI to ASI? Or something else? Or are both of those insurmountable steps? Why do you think what you think, ig?

Okay, here's another one. I think uncertainty is a fair defense, although based on that I would ask whether you value purely EV, or (it seems like) some combination of EV and median outcome?

On that topic, to really test the EV stuff, let's use another hypothetical. Someone offers you 1 million dollars that can ONLY be used towards funding one topic, whether it be AI risk or climate change or whatever. They offer you a choice: you can either take the money and send it to that topic, or you can flip a weighted coin which is 51 H, 49 T. If it flips H, your money is doubled. If it flips T, all your money is dead. It seems like the EV approach here would be to flip the coin indefinitely, or, if we're using more human-scale numbers (quadrillions of dollars probably don't do much additionally, since there's only so much human labor you can assign), at least 25 times?

If we're using utility instead of money and they give you 1 util to spread to the world (and I admit that I'm on shaky ground giving this hypothetical because I'm not even a utilitarian), but utils are presumably uncapped and they gave you the same coin, from a pure EV standpoint wouldn't you flip the coin forever? Which would always eventually lead to 0.
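A quick sketch of that weighted-coin hypothetical makes the tension concrete: every flip is +EV, so expected value keeps compounding, while the probability of ending with anything at all shrinks toward zero (the numbers are the ones from the hypothetical above):

```python
# 51/49 double-or-nothing: each flip multiplies the EV by 2 * 0.51 = 1.02,
# but the chance of not having busted after n flips is 0.51**n -> 0.
p_win = 0.51
stake = 1_000_000  # starting pot (dollars or utils, per the hypothetical)

for n_flips in (1, 25, 100):
    ev = stake * (2 * p_win) ** n_flips  # expected pot after n flips
    p_alive = p_win ** n_flips           # probability the pot is still nonzero
    print(f"{n_flips:>3} flips: EV ~ {ev:,.0f}, P(pot > 0) ~ {p_alive:.1e}")
```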

On the AGI/ASI stuff, first AGI. I think that Narrow AI can be dangerous and AGI could even potentially be dangerous, but I don't think the emergence of AGI from Narrow AI in the next 50 years is likely, because we don't really have a complete theory of mind, and that makes it seem really difficult for an AGI to emerge. It would be like a 1200s biologist accidentally creating Ebola++. Well, maybe they were even intending to make Ebola++, but they don't have germ theory or anything like that. Additionally, let's not even talk about LLMs, because I don't think that's gonna get us to AGI; let's talk about paradigm shifts. Even if some paradigm shift happens, we're assuming that consciousness is Turing-computable here, which has some funny implications. Additionally additionally, if we're talking about some iterative intelligence explosion situation, it's my understanding that the threshold has to be above human intelligence. The approach would be rather slow, so I find it hard to see AGI taking us by surprise.

On ASI, most ASI scenarios I've heard put ASI to humans in the same relation as humans to ants... in the same way that I meant 10^-9999999999999 when talking about Rayo(123456). The versions I've heard usually posit a much larger gap but just use that as an example and say "assume that, like, 1 million times, but 1 million actually means a much larger number than that because I'm lazy." I think there are a ton of assumptions made there that intelligence will just keep going up like that, but for now I'll give this one: I think it assumes either that P = NP, or that the processes necessary for that level of intelligence wouldn't be something NP-hard or undecidable or something.

Regarding specific hypotheticals for AI (non-x, non-s) catastrophic risk, would you say scenarios like those mentioned in my AI war crimes market description are of the level you mean?

Yeah, that's about correct (although a bit on the low and recent end)

@TheAllMemeingEye I think the most plausible worst-case scenario before 2050 is that one country makes a major breakthrough in some type of very sophisticated Narrow AI or AGI that proves very, very, very useful for war + invalidates MAD + takes very little commitment from that nation to use; that country basically takes over the world, and they use AI + surveillance. In that scenario you'd get a global lock-in of very undesirable values. Like, a 1984 scenario.

I think this is sorta the tail of the tail risk of what I think is plausible before 2050 in the worst-case scenario. I would class it as somewhere from slightly better to slightly worse than nuclear eradication.