Will any model pass an "undergrad proofs exam" Turing test by 2027?
2027 · 79% chance
The model receives each question as text (or text + images), outputs an answer as text + images, and is graded as part of a pool with human students who also took the test. "Pass" means scoring >=70%. The exam has to be proofs-based, e.g. abstract algebra, topology, or linear algebra if it's proofs-heavy. There are probably undergrad math exams *somewhere* that are very easy, so I will be exercising my judgment on whether the exam "counts". Unfortunately I do not have examples to hand of what I consider reasonable, but something like "would be a medium-difficulty 200-level proofs exam at a top-tier university".

Have you tested current models on this? All these proofs are extremely standard and are all over the internet, meaning models were trained on them. I'd honestly be surprised if o3 or Gemini Thinking couldn't get 70% on a standard test of this kind.

@pietrokc That's probably true. It is not really the intent of the question (I made this market before LLMs had really taken off and did not think of the training-data issue), but I don't think it can be avoided. Sometime in the vaguely near future I'll try to find a proofs exam and test this.

Ultra-likely. Topology was almost trivial, especially compared to IMO questions. I recall "prove at least 3 of 12 theorems" being the final exam; I'm pretty sure an AI could blow past that even today. The handwriting, and telling the AI not to be too giga-brained (make a few mistakes and don't solve them all), would be harder than the solving itself.
(Not betting due to Gigacasting’s First Law of AI Optics: any research lab smart enough to pass a Turing test will be smart enough to know never to run one)