Will humans be responsible for the solutions to any remaining Millennium Prize problem?
This question is aimed at the scenario where most future advanced math is done by AI. If even 25% of the "effort" of solving the problem is made by human mathematicians, I will resolve YES.
If deep learning ends up largely superseded by some future paradigm I will update the question to reflect that. The question is about the current cutting edge of AI, not deep learning specifically.
To my knowledge, essentially all existing theorems would pass the bar of "mostly solved by humans".
Example: if this question were about the four color theorem, it would have resolved YES.
A human working with a variety of AI assistants (e.g. a very good arXiv searcher, a proof assistant that can prove undergraduate theorems but not most graduate-level theorems on its own) still resolves YES.
More generally: if there's AI involved and it couldn't get near-perfect scores on all math competitions, ace any graduate level math exam, prove most theorems in published textbooks given the setup, etc. the question resolves YES. (It may resolve YES even for AI that can do all of that depending on the scenario, but you can be confident that anything short of that will still count as "mostly human")
If almost all of the work is done by humans and then the last few steps are done by a weak AI for PR reasons or something like that, the question still resolves YES.
Question resolves N/A at market close if no human has solved a Millennium Prize problem and there are still some left.
This condition is rather annoying. Why not just have the market run indefinitely? Doing it like this makes the risk for the NO-side artificially small, even though the probability that this question will actually resolve NO is near zero.
You think that the N/A clause diminishes the risk of the NO side? I think it's the opposite: at the time it N/As, if all the problems are still open, then it's basically a wash, but if any problems are solved (yet this market is still unresolved), that means that humans solved 0 and AI solved non-0. That would be a strong update towards the belief that AI and not humans will eventually solve the others too, so in that world I'd be relieved that my YES shares are getting N/A'd.
@BenjaminCosman There are basically eight possible scenarios, which are:
1. No problems solved (other than Poincaré, of course) by 2040, market resolves N/A
    1.1 Some problem is solved by humans later, market ought to have resolved YES
    1.2 All problems resolved by AI later, market ought to have resolved NO
    1.3 At least one problem remains unsolved forever, market never resolves
2. Some problem solved by humans by 2040, market resolves YES
3. No problems solved by humans, some but not all by AI by 2040, market resolves N/A
    3.1 As 1.1
    3.2 As 1.2
    3.3 As 1.3
4. All problems solved by AI by 2040, market resolves NO
The most likely scenario by quite a bit is 1.1. The fact that the market resolves N/A in this case rather than YES is significantly in the NO-side's favour. The fact that 3.2 resolves N/A rather than NO does work in the YES-side's favour, but only ever so slightly, as this scenario is vanishingly unlikely to begin with.
For the record, my assessment of the relative likelihood of these scenarios is
1.1 > 2 >>> 3.1 >> 1.2, 3.2, 4 > 1.3, 3.3
@Lorxus One problem is that maybe doom will be sooner than math applications but in general, yes, I think this is mispriced.
@BoltonBailey Not particularly. I'm just aware that he's been working his way towards a solution for the last few years, and from what I can understand (I'm not in analysis, I'm in geometry) his program seems a promising one. If I had to give my genuine best guess, I think he'll crack it easily within the decade.
@vluzko Humans that have modified themselves in some way to be smarter, think faster, have a better memory, etc.
@IsaacKing Will resolve YES if none of the augments involve brain-computer interfaces running a powerful mathematical AI; will probably resolve N/A if there are BCIs with significant mathematical capabilities.
Such a weird take.
Everyone deifies "AI" as their new god or "first mover" when every step of the computer and deep learning revolutions was the work of (sometimes small, sometimes enormous) groups of people.
AlphaFold was not the product of “AI”; it was clever engineering.
GPT wasn't AI, or even clever ML; it was simply infrastructure engineering.
Even Stable Diffusion was simply a more elegant, more efficient architecture.
—
If someone sets up a theorem prover model, that was 100% human effort and will be "solved by humans".
The “five years ago” “AI wrote this song” craze is over. Whatever emergent properties these systems have, they are in some sense “100% human effort” to design and build.
Look for your god (or Satan) elsewhere rather than imputing agency to human-created things, that are a long way from taking on any.
@Gigacasting Forgetting about philosophical questions about 'agency', it's reasonable to think about what inputs go into solving some problem. The problem "get me across town" used to be solved by humans with their feet, and now it might be solved by humans getting into a (human-built) self-driving car and pushing one button. Sure at some deep enough level it's human work either way, but it corresponds to a very real difference in the world and in the experience of solving that problem, so it seems like a reasonable thing to ask about. This particular question may be more subjective than the walking-vs-self-driving-car example, but it's the same idea - will the experience of doing complicated math problems require lots of new human effort for each problem, or will it require a one-time human effort of developing AlphaMath, after which for each new problem the only effort will be typing the problem into AlphaMath?
@Gigacasting That seems a weird philosophical take.
So you are saying that if we had AGI and it could do everything humans do, and it was goal-oriented in the sense that it's useful to understand it as taking actions based on whether it thinks they will achieve a goal, it would not count as agentic because humans made it?
Or even worse, if we made an em that was functionally identical to a human would it not have agency because we made it?
Humans were 100% made by evolution too, and children are "made" by their parents, but that doesn't mean we don't have agency.
Or is your point that we are not going to get such AGI for a long time, and that current things like RL agents that look agentic really aren't, even if building agentic AI is possible?
Because that seems hard to square with your expecting more than a 70% chance of AGI by 2040 for the FTX Future Fund prize.
I mean I guess their definition doesn't talk about agentiness, only about being able to do all jobs.
But still, do you really expect that agentiness is so intrinsically biological, or so hard to figure out, that we won't be able to figure it out by the time we figure out everything else humans do?