Change My Mind - @Bayesian's Megamarket
Jun 16
60%
I'll change my mind about: "The most natural and probable outcome for a civilization like humankind is for it to end within a few millennia with a Singleton, a single entity that has ~complete control over everything."
50%
I'll change my mind about :"Resolving an option NO when I'm sufficiently confident my mind won't change is the best potential solution I'll come to know before June 16."
48%
I'll change my mind about: "Manifold's value as a prediction market will be clearer when market creators are consistently willing to pay $ for information, rather than receive $ for getting traders."
25%
I'll change my mind about :"The Superalignment team's leads resigning is a really grim fact about OpenAI and AI safety."

For ``` I'll change my mind about: "X" ``` options:

Options resolve YES if my mind was changed, either within this comment section or outside of it. "Changing my mind" doesn't require flipping from definite agreement to definite disagreement, but a gotcha about how something has a slight asterisk I hadn't noticed is not enough. If ambiguous, an option may (but rarely will) resolve to a percentage.

Resolves NO if/when I am confident enough that my mind will not be changed in the coming months that leaving the option open would be overly distortionary because of interest rates and such. There may be a better way to settle an option NO; I welcome anyone to propose a better alternative.

For ```Stronger (YES) or Weaker (NO) belief in X``` options:

Resolves YES if I end up with a significantly higher credence in the proposition represented by the letter X, and resolves NO if I end up with a significantly lower credence in that proposition.

I won't bet on an option unless it's within a minute of its creation or its resolution. (I'll also try my best to respect things like not buying through a limit order when this happens.)
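For concreteness, here's a rough sketch of how I intend to apply these rules. It's an approximation of the prose above, not a binding procedure; the function names, flags, and the "significantly" threshold are just illustrative.

```python
# Rough, non-binding sketch of the resolution rules described above.
# All names and the 0.1 "significantly" threshold are illustrative only.
from typing import Optional


def resolve_change_my_mind_option(mind_changed: bool,
                                  only_a_minor_asterisk: bool,
                                  confident_no_change_coming: bool,
                                  ambiguous_credence: Optional[float] = None) -> Optional[str]:
    """Return "YES", "NO", a percentage string (rare ambiguous case), or None to leave open."""
    if mind_changed and not only_a_minor_asterisk:
        return "YES"  # mind genuinely changed, in this comment section or outside it
    if ambiguous_credence is not None:
        return f"{round(ambiguous_credence * 100)}%"  # rare: ambiguous, resolve to a percentage
    if confident_no_change_coming:
        return "NO"  # leaving the option open would be overly distortionary
    return None  # otherwise, keep the option open


def resolve_stronger_weaker_option(old_credence: float, new_credence: float,
                                   threshold: float = 0.1) -> Optional[str]:
    """YES if credence in X rose significantly, NO if it fell significantly, else leave open."""
    if new_credence - old_credence > threshold:
        return "YES"
    if old_credence - new_credence > threshold:
        return "NO"
    return None
```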

I'll change my mind about: "Manifold's value as a prediction market will be clearer when market creators are consistently willing to pay $ for information, rather than receive $ for getting traders."
opened a Ṁ20 I'll change my mind ... YES at 52% order

I think there are excellent first-principles reasons to want trading fees to go to market creators. If it would be interesting I might do a write-up on this.

It would be interesting to me!

I'll change my mind about: "The most natural and probable outcome for a civilization like humankind is for it to end within a few millennia with a Singleton, a single entity that has ~complete control over everything."

This seems pretty weird to me. Control is hard and needs to be meticulously maintained, while chaos is the natural state! It is a bit like betting against the second law of thermodynamics.

Exhibit A: North Korea-style dictatorships still fail to control a lot of stuff, like people watching South Korean movies https://www.voanews.com/a/in-video-north-korea-teens-get-12-years-hard-labor-for-watching-k-pop-/7448111.html.

Exhibit B: Even though we (humans) are in many senses much more powerful than slugs, cockroaches or ants, we cannot completely control them in any meaningful sense even on relatively small scales (ask anyone who's tried to rid their house/garden of those things).

@MartinModrak I am not the market creator, but there's a tweet from Eliezer on your insect metaphor which I think is highly relevant: https://twitter.com/ESYudkowsky/status/1790050422251761969 (skip to Problem Two)

Control is hard, but the optimization power of individual humans is limited: we can't modify our own neural architecture, and trying to scale organizations of multiple humans runs into massive inefficiencies. An artificial superintelligence with no need to waste time in various ways, much faster thinking, and the ability to develop new algorithms on the fly and modify itself as needed would have much more optimization power.

@MartinModrak I am skeptical of arguments that go from 'the 2nd law of thermodynamics is a robust statistical law' to 'multiagent equilibria get more and more chaotic', but even if I accepted that, I think a more chaotic multiagent setup is more likely to end with a singleton, because it's a stable end state that is hard to get out of, unlike a situation where all the agents are competing on tech that might at any point give one of them a decisive advantage. What am I missing?

@Bayesian I think you are making a tacit assumption that AI capabilities are likely to improve beyond all limits in all regards, making the AI basically a god. I think that's wrong - both the AI's pure intelligence and its ability to exert control over physical space will quite likely run into limits and I would bet those are substantial. Under a wide range of such limits, a set of (semi-)autonomous agents can easily prove more efficient and resilient than a single entity.

This also allows some pathways out of the singleton state - if the singleton still needs to provide some autonomy to subunits, it is plausible that some of those units will manage to increase their autonomy (in a weak analogy to cancer).

Another, distinct line of problems is your definition of "everything": there are many things which are intrinsically hard to control (weather, space weather, bacteria, ...). Are you excluding those?

@MartinModrak Not beyond all limits, not a god. There are physical-reality limits, there are complexity-theoretic limits, stuff like that, and those things don't seem very limiting in absolute terms, so I don't see these limits preventing an AI from surpassing, for example, the manufacturing and technological capacity of a country, or all countries, or all countries ten times over, while being a distributed system that cannot realistically be destroyed. I'm curious why you think the limits would be anywhere close to our current power level.

I think if subunits of the singleton are just programs / other AIs, they can be error corrected and have lots of fancy math that makes cancer statistically impossible, in a similar way to how decreasing entropy is impossible.
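As a toy sketch of the kind of error-correction math I'm gesturing at (made-up numbers, not a real model): with N independent redundant copies and majority voting, the probability that a majority of copies "defects" falls off exponentially in N.

```python
# Toy sketch, not a real model: chance that a strict majority of n independent
# redundant copies "defect", when each copy defects with probability p.
from math import comb


def majority_defection_prob(n_copies: int, p_defect: float) -> float:
    """P(more than half of n_copies defect), assuming independent defections."""
    k_min = n_copies // 2 + 1
    return sum(comb(n_copies, k) * p_defect**k * (1 - p_defect)**(n_copies - k)
               for k in range(k_min, n_copies + 1))


if __name__ == "__main__":
    # With p_defect = 1%, this falls from roughly 3e-4 at n = 3 to below 1e-70
    # at n = 101: redundancy makes coordinated "cancer" astronomically unlikely.
    for n in (3, 11, 101):
        print(n, majority_defection_prob(n, p_defect=0.01))
```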

And yeah, in this case "controlling everything" is the kind of control where controlling nature doesn't matter, as long as other agents cannot realistically act against you.

@Bayesian The limits I have in mind are more of the engineering type: making more efficient computers/turbines/photosynthesis/steel production... I find it completely plausible that in all of those examples the things we see currently in the world are within an order of magnitude or less of the maximum for any practical system (with the exception of computing, where two orders of magnitude of improvement seem not so far fetched).

I do have to admit that upon closer inspection the connection between those engineering limits and the limits on singleton power is weaker than I initially thought.

One extra line of attack is that the speed of improvement in AI (or really anything) isn't limited just by the actor getting better at reaching the next level, but also by the increase in difficulty of reaching the next level. So by tweaking the difficulty curve you can get the (currently still quite hypothetical) AI takeoff to have any shape you like, from exponential (the increase in difficulty between successive "levels" is negligible compared to the increase in skill) to linear (increase in difficulty comparable to increase in skill) to basically flat (the increase in difficulty is exponential with a larger base than the increase in skill). There's very little information that can be used to guess the relative shapes of the improvement/difficulty curves (though exponential increases in difficulty are definitely not implausible).
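As a toy numerical sketch of this point (the parameters g and h are made up, this is not a forecast): suppose skill multiplies by g per level cleared and difficulty multiplies by h per level, so clearing level k takes (h/g)^k time units. Varying h against g reproduces the three shapes above.

```python
# Toy model, not a forecast: skill multiplies by g per level cleared,
# difficulty multiplies by h per level, so clearing level k takes (h/g)**k time.
def levels_reached(time_budget: float, g: float, h: float, max_levels: int = 1000) -> int:
    """Number of capability levels cleared within time_budget under the toy model."""
    elapsed, level = 0.0, 0
    while level < max_levels:
        time_for_next = (h / g) ** level
        if elapsed + time_for_next > time_budget:
            break
        elapsed += time_for_next
        level += 1
    return level


if __name__ == "__main__":
    # First case clears levels so fast it just hits the max_levels cap (explosive);
    # second grows linearly with the time budget; third is nearly flat (logarithmic).
    for g, h, label in [(2.0, 1.1, "difficulty nearly flat vs skill"),
                        (2.0, 2.0, "difficulty keeps pace with skill"),
                        (2.0, 4.0, "difficulty outruns skill")]:
        print(label, [levels_reached(t, g, h) for t in (10, 100, 1000)])
```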

So I don't think you can flatly rule out the possibility of AI never really being that much smarter than a human, which would preclude singleton control over the whole planet - anything at very large scale would have to be performed by groups of AIs putting limits on central control vaguely similar to the limits of central control individual humans can wield.

@MartinModrak

> by tweaking the difficulty curve you can get the (currently still quite hypothetical) AI takeoff to have any shape you like

Completely agree there! I think our world is one that appears more likely under some hypotheses than others, and certainly it will ultimately be some kind of sigmoid, because power concentration cannot go to infinity. My guess is that it's pretty exponential for a long time; computers are quite a few OOMs away from theoretical power and efficiency limits IIRC, but they can also be stacked into a planetary supercomputer if you want to keep scaling, and it's harder to see a limit on that end. On the last point of that paragraph, I do think we are drowning in information and the difficulty is in using it correctly.

And yeah, to your last point, agreed we can't preclude it with certainty, like everything else; for the purpose of this market, if I am pretty confident it's possible then I'd need to become confident it isn't. I'm concerned that too few of these will resolve positively and people will be annoyed, so I might put up things I'm less confident in. I think it's plausible that human brains are within 2.5 OOM of theoretical maximum efficiency, but that a future powerful AI will just be the size of 1000 human brains or more while having a similar efficiency, at which point efficiency is not saving you; it's only really helpful in a species whose intelligence is strongly restricted by the size a head can be at birth, or whatever.

@Bayesian Not an engineer, dunno about the efficiency of wind turbines and photosynthesis, though those are interesting questions. I don't see them mattering significantly to the current question. Computing does matter, but I guess the power of scale seems to matter more? If the current brain is the efficiency cap, I still think the power differential between humans and AI can be very, very large.

I'll change my mind about: "The most natural and probable outcome for a civilization like humankind is for it to end within a few millennia with a Singleton, a single entity that has ~complete control over everything."

I think nuclear weapons probably increase the expected time beyond a few millennia. This timeline has a clear and drastic anthropic shadow with regard to nuclear weapon use (the way World War II played out, the structure of the Cold War, near misses), and I would expect most civilizations to stall technological progress by blowing themselves up repeatedly.

@SaviorofPlant i.e. technological progress is too hard, big explosions are too easy, and you can't get sufficient technological progress quickly enough to overpower the other agents that have access to big explosions?

If so, I think the part I disagree with is technological progress being too hard. I currently think intelligence enhancements and AI make the world super duper fragile.

@Bayesian Current AI is predicated on decades of developments in math, computing and hardware engineering as well as plentiful data from a very open society, and it still took 80 years after the development of nuclear weapons. I think states are irrational enough for nuclear weapon use to be highly probable in that timeframe, and I also expect nuclear weapon use to create worldstates where that type of technological progress is very unlikely.

@SaviorofPlant oh, if AI development is slow enough, and the trajectory is obvious enough, there may come a point where nuclear powers prefer a nuclear catastrophe to the status quo of technological defeat. It could also be the case that enough states act counter to their own best interests that they just swing nuclear weapons around and that makes everyone explode. I think, though, that overall an international nuclear war is less than 50% likely in the next 50 years? Do you disagree?

@Bayesian The stigma on nuclear weapon use grows as time passes without it occurring, which lowers the probability. Most of the probability mass for a multiparty nuclear war is shortly after development, which for us was ~1945-1965. I think the odds of this occurring are somewhere north of 90% - our case seems highly atypical: the technology was developed, when a war was almost over, by a power which had no intention of using it aggressively to further its own interests besides ending a war it would have won anyway.

That said, I do also disagree with that statement (disregarding AI). I think social media makes it easier for irrational actors to come into power than before, and I expect nuclear weapons use to be more likely in the 2050s than it was in the 1990s.

@SaviorofPlant OK, I reread your original comment and your reply in the comment thread above; is your position that it'll happen, but in 20 millennia instead? I guess "a few millennia" depends on when you consider civilization to have started. My position isn't that we're unusually late or anything, though that might be true, but more that from a point like ours a singleton is a very hard-to-avoid natural outcome.

@Bayesian Yeah I don't disagree with the point you actually want to make, I just felt like pedantically arguing against the argument as you phrased it.

@SaviorofPlant LOL, okay, fair enough. In the future I'd encourage you to state that more clearly so I know. Precision is important, but as stated in the description I consider "changing my mind" to be more than "realizing I phrased a thing technically wrong".

@Bayesian Listen, I asked you to create an answer I disagree with, this was the closest thing 😅

If it helps, here are some hot takes you might disagree with:
- I believe Goodhart's Law and outer misalignment are concepts describing the same core phenomenon
- I would argue there is a substantial probability of small birds / mammals and large reptiles being moral patients (and a small probability for trillion-scale language models)
- I believe free will is an incoherent, confused way of trying to point at agency, which is plainly false as described
- My p(doom) by the end of 2025 is around 25% (and I expect a significant fraction of surviving worlds to be in an irrecoverable position)

Please create an answer I disagree with lol

@SaviorofPlant LOL, I will try!

I think the setup here is strange. I state my current position, and then if it's at 90% you might think that intuitively implies confidence in that position, but as currently stated it instead implies confidence that my mind will be changed about it, i.e. that it is wrong.

I could phrase it as "I'll change my mind about X"

open to better alternatives

bought Ṁ10 I'll change my mind ... NO

@Bayesian You could make options for probabilistic beliefs you have, and resolve to your updated probability (in either direction) after people try and change your mind

@SaviorofPlant that's a good idea!