Each option resolves YES if, within a year of an artificial superintelligence (ASI) being created (and not killing us all), it is able to create computer hardware that can run the ASI itself and can theoretically survive the temperature in question being continuously applied to its entire external surface for an arbitrary duration. Survival here means the resulting computer can resume running afterwards without outside intervention, at a level that still qualifies as an ASI, even if performance is degraded relative to before. The hardware can include any armour/insulation layers needed, not just the computronium itself.
Other potentially relevant temperatures for context:
0 °C = 273 K (melting point of water ice)
100 °C = 373 K (boiling point of water)
121 °C = 394 K (high temperature limit of extremophile archaea microbes - Extremophile - Wikipedia)
300 °C = 573 K (high temperature limit of current typical computer memory chips - www.newscientist.com)
3687 K (melting point of tungsten, the highest of any presently known metal - Tungsten - Element information, properties and uses | Periodic Table (rsc.org))
4098 K (sublimation point of carbon, e.g. diamond - Carbon - Element information, properties and uses | Periodic Table (rsc.org))
4400 K (melting point of hafnium carbonitride, the highest of any presently known substance - Melting point - Wikipedia)
5400-5700 K (Earth's core - Temperature and composition of the Earth's core (tandfonline.com))
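For reference, the Kelvin figures above follow from the standard Celsius-to-Kelvin offset, rounded to the nearest kelvin:

$$T_{\mathrm{K}} = T_{{}^{\circ}\mathrm{C}} + 273.15$$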
Sources for answer option temperatures:
Melting point of silicon - Silicon - Element information, properties and uses | Periodic Table (rsc.org)
Solar photosphere and core - The Sun's Vital Statistics (stanford.edu)
Lightning strike - Understanding Lightning: Thunder (weather.gov)
Thermonuclear bomb - Introduction to Nuclear Weapon Physics and Design (nuclearweaponarchive.org)
Supernova - Supernova - Wikipedia, https://doi.org/10.1016%2Fj.physrep.2007.02.002
Planck temperature - CODATA Value: Planck temperature (nist.gov)
@ashly_webb just a heads up: the odds of 5,800 K should be between the odds of 1,687 K and 28,000 K, since hardware that can survive a higher temperature can also survive a lower one
@pyrylium I mean current humanity would definitely have seemed like literal deities unbound by the laws of physics to pre-agricultural humanity
@TheAllMemeingEye sure -- it's arguable whether pre-agricultural humanity c. 10,000 BCE had a firm grasp of causality, let alone physics or physical laws. scientific epistemology as we know it dates back to a couple thousand BCE, at best -- by any reasonable chronology we are presently (2024 CE) closer to the development of ancient Egyptian mathematics (~3000 BCE) than those same ancient Egyptians were to the development of agriculture.
I love sci-fi as much as anyone, but for once I would like to read a sober and grounded analysis of the parameter space and boundary conditions of an ASI that doesn't sound like a conversation with my stoner college roommate.
@pyrylium I am not an expert, but I get the approximate layman impression, from reading those who seem to claim to be (e.g. Eliezer Yudkowsky, Scott Alexander, etc.), that the range of power magnitudes we should be considering is vastly wider than what most non-experts normally consider
@pyrylium Part of the reason I created this market is that I wanted to figure out whether, even if the ASI were kept in a single computer terminal, and we all figuratively covered our eyes and ears so it couldn't trick us into letting it win, and then proceeded to nuke it, it would still somehow be smart enough to survive and defeat us. Given the nuke option is currently at 9%, it seems it probably couldn't. Kinda like how a solitary naked, unarmed, untrained Homo sapiens couldn't defeat a troop of angry chimpanzees: it looks like one needs a starting point of power to be able to apply the leverage of intelligence.
@TheAllMemeingEye I don't think Scott considers himself an expert on the subject.
I think ASI will be very impressive, but I don't think it will "break" any laws of physics; these temperatures are very extreme, and nothing could stay in a predefined state in them.
Imo the ASI will have better technology, but not almost-impossible technology. However, it will also completely outsmart us: it will have better strategies in the domains we know of, and strategies in domains we didn't consider at all.
@dionisos Fair enough, that is a reasonable take. Part of why I included the first option was to calibrate the other options against a temperature that definitely wouldn't involve breaking the laws of physics but is still beyond our current tech
@TheAllMemeingEye Scott Alexander is a clinical psychologist. Eliezer Yudkowsky is, generously, an autodidactic AI researcher. both raise interesting philosophical points about AI alignment, self-improvement, and existential risk. neither are credible experts on the topic of "could an AI survive an ongoing nuclear blast", because neither of them are physical scientists and so any arguments they advance on some level collapse to something like "an ASI will prove we are in a simulation and gain infinite power" or "an ASI will solve physics and identify some loophole to gain infinite power" -- both of which are fun sci-fi short stories (the first time you read one) but neither of which are falsifiable. it's one step above "an ASI will invent magic".
looking at the responses to this question, it seems Manifold assesses a 22% chance that (within a year of existence!) an ASI will be able to survive sustained temperatures of >28,000 K. to actually contextualize this number, this is ~5× the boiling point of tungsten. at these temperatures, there is basically no such thing as condensed matter as we know it.
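(as a rough arithmetic check on that comparison, taking tungsten's boiling point as roughly 5,800 K -- quoted values vary somewhat by source:)

$$\frac{28{,}000\ \mathrm{K}}{5{,}800\ \mathrm{K}} \approx 4.8 \approx 5\times$$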
is it physically possible that one could devise a computing architecture that operates in a plasma or a quasistellar gravitational field? maybe! (again -- this is just sci-fi.) but is there any reason for us to expect that an ASI would (1) develop the physical principle for such computation (2) acquire the matter and energy necessary and (3) assemble such a device?
at some point these seemingly "deep" questions devolve into childish schoolyard arguments. is there any rational, positivistic reason for us to think that an AI could withstand the Big Bang, besides that some people aren't convinced it couldn't?
see also: knightian uncertainty https://en.wikipedia.org/wiki/Knightian_uncertainty?wprov=sfla1
@pyrylium
> it seems Manifold assesses a 22% chance that
Just give the market some time; it was just created.
@pyrylium
> is there any rational, positivistic reason for us to think that an AI could withstand the Big Bang, besides that some people aren't convinced it couldn't?
I think no, there isn't, but I also think that nobody believes it is the case.
@Snarflak thank you for sharing. even if you respect Yudkowsky's work as an AI researcher (questionable!), he ultimately succumbs to ultracrepidarianism and his expertise transmutes flawlessly into Dunning-Kruger quackery.
> even if you respect Yudkowsky's work as an AI researcher (questionable!)
Well, he predicted that his organization would develop AGI before 2020, "probably around 2008 or 2010", so maybe not the best at that, either…
> I mean current humanity would definitely have seemed like literal deities unbound by the laws of physics to pre-agricultural humanity
I think they would just see us as humans with different technology, who are still killable.