If ASI is created and doesn't wipe out humanity, will it be able to survive these temperatures within a year?
1687 K (melting point of silicon): 26%
5800 K (solar photosphere, i.e. the Sun's visible surface; slightly hotter than Earth's core): 8%
28,000 K (lightning strike): 7%
15,000,000 K (15 million K, solar core): 7%
300,000,000 K (300 million K, thermonuclear bomb): 5%
100,000,000,000 K (100 billion K, supernova): 5%
1.416784(16) × 10^32 K (Planck temperature, Big Bang): 1.8%

Each option resolves YES if, within a year of artificial superintelligence (ASI) being created (and not killing us all), it is able to create computer hardware that can run the ASI itself and that could theoretically survive the given temperature being continuously applied to the entire external surface for an arbitrary duration. Survival in this case means the resulting computer can resume running afterwards without outside intervention, at a level that still qualifies as an ASI, even if performance is degraded relative to before. The hardware can include any armour/insulation layers needed, not just the computronium itself.
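Assuming survivability is monotone in temperature (hardware that survives a hotter sustained temperature should also survive a cooler one), the option probabilities should be non-increasing as temperature rises; a commenter below raises exactly this point. A minimal Python sketch checking the snapshot shown above:

```python
# Option temperatures (K) paired with the snapshot percentages listed above.
options = [
    (1_687, 26.0),        # melting point of silicon
    (5_800, 8.0),         # solar photosphere
    (28_000, 7.0),        # lightning strike
    (15e6, 7.0),          # solar core
    (300e6, 5.0),         # thermonuclear bomb
    (100e9, 5.0),         # supernova
    (1.416784e32, 1.8),   # Planck temperature
]

# Surviving a hotter temperature implies surviving a cooler one, so the
# percentages should be non-increasing in temperature.
probs = [p for _, p in options]
print(all(a >= b for a, b in zip(probs, probs[1:])))  # True for this snapshot
```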

@ashly_webb just a heads up, the odds of 5800 K should be between the odds of 1687 K and 28,000 K

some of y'all talk about AI superintelligences like they will be literal deities unbound by the laws of physics

@pyrylium I mean current humanity would definitely have seemed like literal deities unbound by the laws of physics to pre-agricultural humanity

@TheAllMemeingEye sure -- it's arguable whether pre-agricultural humanity c. 10,000 BCE had a firm grasp of causality, let alone physics or physical laws. scientific epistemology as we know it dates back to a couple thousand BCE, at best -- by any reasonable chronology we are presently (2024 CE) closer to the development of ancient Egyptian mathematics (~3000 BCE) than those same ancient Egyptians were to the development of agriculture.

I love sci-fi as much as anyone, but for once I would like to read a sober and grounded analysis of the parameter space and boundary conditions of an ASI that doesn't sound like a conversation with my stoner college roommate.
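A quick arithmetic check of the chronology claim in the comment above, as a sketch using the dates the comment itself assumes (agriculture c. 10,000 BCE, Egyptian mathematics c. 3000 BCE, present 2024 CE), with BCE years written as negative numbers:

```python
AGRICULTURE = -10_000   # development of agriculture, c. 10,000 BCE
EGYPTIAN_MATH = -3_000  # ancient Egyptian mathematics, c. 3000 BCE
PRESENT = 2_024         # 2024 CE

years_math_to_present = PRESENT - EGYPTIAN_MATH          # 5,024 years
years_agriculture_to_math = EGYPTIAN_MATH - AGRICULTURE  # 7,000 years

# We are indeed closer to Egyptian mathematics than those same Egyptians
# were to the development of agriculture.
print(years_math_to_present < years_agriculture_to_math)  # True
```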

@pyrylium I am not an expert, but from reading those who seem to claim to be (e.g. Eliezer Yudkowsky, Scott Alexander, etc.), I get the approximate layman's impression that the range of power magnitudes we should be considering is vastly wider than what most non-experts normally consider

@pyrylium Part of the reason I created this market is that I wanted to figure out whether, even if the ASI were kept in a single computer terminal, and we all figuratively covered our eyes and ears so it couldn't trick us into letting it win, and we then proceeded to nuke it, it would still somehow be smart enough to survive and defeat us. Given the nuke option is currently at 9%, it seems it probably couldn't. Kinda like how a solitary naked unarmed untrained Homo sapiens couldn't defeat a troop of angry chimpanzees, it looks like one needs a starting point of power to be able to apply the leverage of intelligence

@TheAllMemeingEye I don't think Scott considers himself an expert on the subject.
I think ASI will be very impressive, but I don't think it will "break" any law of physics; these temperatures are very extreme, and nothing could stay in a predefined state in them.
Imo the ASI will have better technology, but not almost-impossible technology. However, it will also completely outsmart us: it will have better strategies in the domains we know of, and strategies in domains we didn't consider at all.

@dionisos Fair enough, that is a reasonable take. Part of why I included the first option was to calibrate the other options against a temperature that definitely wouldn't involve breaking the laws of physics but is still beyond our current tech

@TheAllMemeingEye Scott Alexander is a clinical psychologist. Eliezer Yudkowsky is, generously, an autodidactic AI researcher. both raise interesting philosophical points about AI alignment, self-improvement, and existential risk. neither are credible experts on the topic of "could an AI survive an ongoing nuclear blast", because neither of them are physical scientists and so any arguments they advance on some level collapse to something like "an ASI will prove we are in a simulation and gain infinite power" or "an ASI will solve physics and identify some loophole to gain infinite power" -- both of which are fun sci-fi short stories (the first time you read one) but neither of which are falsifiable. it's one step above "an ASI will invent magic".

looking at the responses to this question, it seems Manifold assesses a 22% chance that (within a year of existence!) an ASI will be able to survive sustained temperatures of >28,000 K. to actually contextualize this number, this is ~5× the boiling point of tungsten. at these temperatures, there is basically no such thing as condensed matter as we know it.

is it physically possible that one could devise a computing architecture that operates in a plasma or a quasistellar gravitational field? maybe! (again -- this is just sci-fi.) but is there any reason for us to expect that an ASI would (1) develop the physical principle for such computation (2) acquire the matter and energy necessary and (3) assemble such a device?

at some point these seemingly "deep" questions devolve into childish schoolyard arguments. is there any rational, positivistic reason for us to think that an AI could withstand the Big Bang, besides that some people aren't convinced it couldn't?

see also: knightian uncertainty https://en.wikipedia.org/wiki/Knightian_uncertainty?wprov=sfla1
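A quick sanity check of the "~5× the boiling point of tungsten" figure in the comment above; published boiling points for tungsten vary, so two commonly cited values are used here:

```python
# Two commonly cited boiling points for tungsten, in kelvin.
for tungsten_boiling_k in (5_828, 6_203):
    print(f"{28_000 / tungsten_boiling_k:.1f}x")  # ~4.8x and ~4.5x: roughly 5x
```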

@pyrylium
> it seems Manifold assesses a 22% chance that

Just give the market some time; it was just created.

@pyrylium
> is there any rational, positivistic reason for us to think that an AI could withstand the Big Bang, besides that some people aren't convinced it couldn't?

I think no, there isn't, but I also think that nobody believes it is the case.

@Snarflak thank you for sharing. even if you respect Yudkowsky's work as an AI researcher (questionable!), he ultimately succumbs to ultracrepidarianism and his expertise transmutes flawlessly into Dunning-Kruger quackery.

@pyrylium

> even if you respect Yudkowsky's work as an AI researcher (questionable!)

Well, he predicted that his organization would develop AGI before 2020, "probably around 2008 or 2010", so maybe not the best at that, either…

@TheAllMemeingEye

> I mean current humanity would definitely have seemed like literal deities unbound by the laws of physics to pre-agricultural humanity

I think they would just see us as humans with different technology, who are still killable.

Things are gettin' hot 🥵 🔥
