Benchmark Gap #4: Once a single AI model solves ≥ 95% of miniF2F, MATH, and MMLU STEM, how many months will it be before an AI is listed as a (co-)first author on a published math paper?

This question is meant to measure the gap between solving the main math benchmarks at the time of market creation and contributing to real-world mathematics.

The co-first-author requirement is loose: I will also accept an AI being credited with significant contributions both to deciding what to prove and to the actual proof (merely contributing to the proof is not enough — I am trying to get at "the AI does the work of a mathematician," not "the AI does the work of a proof assistant"). I would also accept, for instance, the human author of the paper stating that they would have named the AI as a coauthor if it were human, or that the result could not have been obtained without the AI's assistance.


In a lot of pure math, author order is arbitrary/alphabetical. Removing that, I second that it'll be 0. Maybe negative.

I think it is plausible that it will be <0


People already list ChatGPT as a coauthor in scientific papers, but not yet in math.
