Will *any* remaining Millennium Prize problem be solved entirely or mostly by humans?
Closes 2040 · 72% chance

Will humans be responsible for the solution to any remaining Millennium Prize problem?

This question is aimed at the scenario where most future advanced math is done by AI. If even 25% of the "effort" of solving the problem comes from human mathematicians, I will resolve YES.

  • If deep learning ends up largely superseded by some future paradigm I will update the question to reflect that. The question is about the current cutting edge of AI, not deep learning specifically.

  • To my knowledge, essentially every existing theorem would pass the bar of "mostly solved by humans".

  • Example: if this question were about the four color theorem, it would have resolved YES.

  • A human working with a variety of AI assistants (e.g. a very good arXiv searcher, a proof assistant that can prove undergraduate theorems but not most graduate-level theorems on its own) still resolves YES.

  • More generally: if there's AI involved and it couldn't get near-perfect scores on all math competitions, ace any graduate-level math exam, prove most theorems in published textbooks given the setup, etc., the question resolves YES. (It may resolve YES even for AI that can do all of that, depending on the scenario, but you can be confident that anything short of that will still count as "mostly human".)

  • If almost all of the work is done by humans and then the last few steps are done by a weak AI, for PR reasons or something like that, the question still resolves YES.

  • Question resolves N/A at market close if no human has solved a Millennium Prize problem and there are still some left.

bought Ṁ112 of YES

Terry Tao is probably nearing a solution of Navier-Stokes, for at least one.

predicts NO

@Lorxus Perhaps, but AI is also very near.

predicts YES

@VaclavRozhon Surely you should buy lots of NO, then!

predicts NO

@Lorxus One problem is that maybe doom will come sooner than math applications, but in general, yes, I think this is mispriced.

predicts NO

@Lorxus Any recent news that makes you think Tao is nearing a solution?

predicts YES

@BoltonBailey Not particularly - I'm just aware that he's been working his way towards a solution for the last few years and from what I can understand - I'm not in analysis, I'm in geometry - his program seems a promising one. If I had to give my genuine best guess I think he'll crack it easily within the decade.

What if AI is perfectly capable of solving the problem, but humans do it themselves for fun?

How will you count augmented humans?

@IsaacKing In the first case: question resolves YES

Augmented in what sense?

bought Ṁ50 of YES

@vluzko Humans that have modified themselves in some way to be smarter, think faster, have a better memory, etc.

@IsaacKing Will resolve YES if none of the augments involve brain-computer interfaces running a powerful mathematical AI; will probably resolve N/A if there are BCIs with significant mathematical capabilities.

(will most of the human effort in solving the next millennium problem be that of machine learning researchers, rather than pure mathematics?)

Coherent and avoids the Kurzweil and AI-doomer-who-shall-not-be-named fallacy

Such a weird take.

Everyone deifies "AI" as their new god or "first mover", when not a single step of the computer or deep learning revolutions was anything but the work of (sometimes small, sometimes enormous) groups of people.

AlphaFold was not the product of “AI”; it was clever engineering.

GPT wasn't AI, or even clever ML; it was simply infrastructure engineering.

Even Stable Diffusion was simply a more elegant, efficient architecture.

If someone sets up a theorem-prover model, that was 100% human effort and will be "solved by humans".

The "AI wrote this song" craze of five years ago is over. Whatever emergent properties these systems have, they are in some sense "100% human effort" to design and build.

Look for your god (or Satan) elsewhere rather than imputing agency to human-created things, that are a long way from taking on any.

predicts YES

@Gigacasting Forgetting about philosophical questions about 'agency', it's reasonable to think about what inputs go into solving some problem. The problem "get me across town" used to be solved by humans with their feet, and now it might be solved by humans getting into a (human-built) self-driving car and pushing one button. Sure at some deep enough level it's human work either way, but it corresponds to a very real difference in the world and in the experience of solving that problem, so it seems like a reasonable thing to ask about. This particular question may be more subjective than the walking-vs-self-driving-car example, but it's the same idea - will the experience of doing complicated math problems require lots of new human effort for each problem, or will it require a one-time human effort of developing AlphaMath, after which for each new problem the only effort will be typing the problem into AlphaMath?

bought Ṁ10 of NO

@Gigacasting That seems a weird philosophical take.

So you are saying that if we had AGI that could do everything humans do, and it was goal-oriented in the sense that it's useful to understand it as taking actions based on whether it thinks they will achieve a goal, it would not count as agentic because humans made it?

Or even worse: if we made an em that was functionally identical to a human, would it not have agency because we made it?

Humans were 100% made by evolution too, and children are "made" by their parents, but that doesn't mean we don't have agency.

Or is your point that we are not going to get such AGI for a long time, and that current things like RL agents that look agentic really aren't, even if building agentic AI is possible?

Because that seems hard to square with you expecting more than 70% AGI by 2040 for the FTX Future Fund prize.

I mean, I guess their definition doesn't talk about agentiness, only about being able to do all jobs.

But still, do you really expect that agentiness is so intrinsically biological, or so hard to figure out, that we won't be able to figure it out by the time we figure out everything else humans do?

bought Ṁ75 of YES

I think the chance that the NO-condition is met is vanishingly small, and moreover the chance that the NO-condition can happen in a world which is otherwise normal enough for a NO-resolution to happen and for me to be alive and still caring about Manifold is even smaller.

