Will an AI get bronze or silver (but not gold) on any International Math Olympiad by end of 2025?
61% chance · 230 traders · Ṁ93k volume · closes 2026

Related market: https://manifold.markets/Austin/will-an-ai-get-gold-on-any-internat

Nov 14, 8:20am: Will an AI get bronze or silver on any International Math Olympiad by 2025? → Will an AI get bronze or silver on any International Math Olympiad by end of 2025?

bought Ṁ1,000 NO

.

bought Ṁ1,000 YES

ByteDance Seed is also a silver, no?

@typeofemale Do you have a good reason to think that they completed the problems within the time limit, rather than after three days, as I argued below?

@BoltonBailey I think there is a lot of baseless speculation on this subject

@BoltonBailey I should also add: I saw this comment where the authors were asked directly about the time limit, and one said that within the time limit "many useful lemmas have been proved", which somewhat confirms to me that they are not claiming the entire problems were proved in that time frame.

The "(but not gold)" in the title was added afterwards? Can I get a refund for this?

Also, how does it resolve now? If one AI gets gold and another gets bronze or silver (but not gold), it should resolve YES, right?

@BAUEREsaie The creator left a comment a few years ago saying that if one AI got gold then this would only resolve YES if another AI got silver or bronze (I guess it's not ideal that this wasn't in the title or criteria until recently, but I also think that these rules are a reasonable interpretation of the original title).

@BoltonBailey Okay, this answers my last two questions.
I'm still confused about the change in the title: it feels like before the edit the question would resolve YES if an AI achieved silver and gold, whereas now it would resolve NO.

bought Ṁ50 NO

Here's my assessment of where things stand:

  • OpenAI got gold, and so by the creator's comment below, it doesn't count for this market.

  • Google Deepmind also got gold.

  • Harmonic claimed gold in their announcement, and they also said they took more than 4.5 hours on some problems, so I think their attempt wouldn't count by the standards of Austin's market anyway.

  • Numina did not solve any problems, per this tweet.

  • ByteDance Seed Prover claims silver, but seems to indicate in their announcement (translation) that they spent three days on the problems, which would be over the time limit.

The one complication is that Thomas Zhu commented in the ByteDance announcement thread on the Lean Zulip that Huawei had "claimed 34/42 (backed by a quote from IMO)" but "could only find relevant news in Chinese". I can't find any reference to this myself; it seems weird that it would be so low-profile that it doesn't appear in English-language media.

@BoltonBailey I found a source in Chinese media claiming so; https://news.yesky.com/hotnews/12/305512.shtml is one


Just search up something like "华为IMO银奖" on Baidu to find more ads that just talk about that.

They posted their solutions on Github https://github.com/huawei-xiaoyi/IMO2025-solutions


I saw this line in a bunch of news articles:

IMO主席Gregor Dolinar教授对华为AI的表现给予了高度评价:“华为AI提交的答卷获得了34分(满分42分),这是一个了不起的成就。(The Huawei AI scripts received a grade of 34 out of 42, which is a remarkable result.)”

Roughly translated as:

Professor Gregor Dolinar, President of the International Mathematical Olympiad (IMO), spoke highly of Huawei AI's performance: "The Huawei AI scripts received a grade of 34 out of 42, which is a remarkable result."

@bohaska Thanks for finding this! I note that (my Google translation of) the article says:

After three days of intense competition, Xiaoyi successfully solved five of the six difficult problems

Which makes me think, as with Seed Prover, that this was technically not within the 4.5-hour competition time frame. But I guess we might need the market creator to weigh in.

Oh no this won’t happen idk what past me was thinking

(to clarify: they'll get gold, and nobody will report the shame of only getting silver)

@Bayesian I mean, DeepMind already reported their AI winning silver, only 1-2 points behind gold. Are you saying no one will do it again?

https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/

(The reason it didn't count was because their AI used too much time compared to the actual length of an IMO.)

bought Ṁ450 YES

@Bayesian Oh, clever. I do think there will be reasoning models which do get bronze or silver, even if they aren't reported on. Like people will just run o3 on the problems and report how it does; those might be sufficient to resolve.

@AdamK Plausible yeah.

@Forrest Did an AI get bronze or silver on any International Math Olympiad by end of 2025?

This is based on another market, which is based on a bet between Christiano and Yudkowsky, so if either of them mention an opinion on what should count, I'll probably defer to it.

As far as I'm aware, there are two issues which might be considered to disqualify the recent attempt: the use of manual translations to Lean, and the fact that it exceeded the time limit by roughly a factor of 15.

It is my current opinion that the 'translations' issue does not disqualify the attempt for the purposes of this market, but that the 'time limit' issue does, and thus that the criteria for YES have not yet been met.

However, I have not looked into the topic in depth yet, and cannot promise that I won't change my mind as a result of more research (nor will I promise to do said research in a timely manner - sorry)

sold Ṁ937 YES

Would be a good idea to have all of that in the description, seeing as this question as of today resolves YES. Thank you.

If there's been news today specifically, what I said above may be out of date.

I feel that having this in the comments is probably good enough, but will admit that if I were a better market creator I'd've given an opinion sooner.

there's not been news today, the official is just doing word gymnastics

I really don't think this qualifies. The "translations" to Lean do some pretty substantial work on behalf of the model. For example, in the theorem for problem 6, the Lean translation that the model is asked to prove includes an answer that was not given in the original IMO problem.

theorem imo_2024_p6 (IsAquaesulian : (ℚ → ℚ) → Prop)
    (IsAquaesulian_def : ∀ f, IsAquaesulian f ↔
      ∀ x y, f (x + f y) = f x + y ∨ f (f x + y) = x + f y) :
    IsLeast {(c : ℤ) | ∀ f, IsAquaesulian f →
      {(f r + f (-r)) | (r : ℚ)}.Finite ∧
      {(f r + f (-r)) | (r : ℚ)}.ncard ≤ c} 2

The model is supposed to prove that "there exists an integer c such that for any aquaesulian function f there are at most c different rational numbers of the form f(r)+f(−r) for some rational number r, and find the smallest possible value of c".

The original IMO problem does not state that the smallest possible value of c is 2, but the theorem AlphaProof was given to solve has the number 2 right there in the statement. Determining that value is part of the problem.

Link: https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/imo-2024-solutions/P6/index.html
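For comparison, a formalization that leaves the answer open might existentially quantify the bound, so the prover must both exhibit the value and show it is least. This is a hypothetical sketch of my own (the theorem name and shape are not anything DeepMind published), with the proof omitted:

```lean
-- Hypothetical "answer-free" variant of the problem statement:
-- the solver must supply the least bound itself, not just verify 2.
theorem imo_2024_p6_open (IsAquaesulian : (ℚ → ℚ) → Prop)
    (IsAquaesulian_def : ∀ f, IsAquaesulian f ↔
      ∀ x y, f (x + f y) = f x + y ∨ f (f x + y) = x + f y) :
    ∃ answer : ℤ, IsLeast {(c : ℤ) | ∀ f, IsAquaesulian f →
      {(f r + f (-r)) | (r : ℚ)}.Finite ∧
      {(f r + f (-r)) | (r : ℚ)}.ncard ≤ c} answer := by
  sorry
```

Under this phrasing, guessing-and-checking candidate values of c becomes part of the task the prover is graded on, rather than information handed to it.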

Thanks for pointing this out. I'm surprised that I haven't seen anyone else say this. I agree with you that the question should not be resolved YES (yet).

They said they brute-forced the answer for the algebraic questions and then let the model prove it. So this does count; it's just multiple different components.

Cf. Gowers's comment here; my understanding is that the formalizer network arrived at this additional information, and it was not provided by humans? https://x.com/wtgowers/status/1816840103349662114

This makes no sense to me. It basically works because the answers are guessable? Still highly skeptical. At the very least, this limitation seems to preclude anything that doesn't involve unconscionably vast compute resources, and to preclude solving any problem that does not have a cutesy toy answer that can be naively guessed (all even numbers, c = 2, etc.). Certainly this should make others substantially more skeptical that this performance wasn't just a fluke of the competition having so many answers of this type.

AFAICT it makes many guesses, disproving some and proving others. It's a mixture of deduction and abduction; some version of this is done by people as well.

Actually, I guess it was autoformalization at train time and human formalization at test time?

Seems like a rather critical detail.

The autoformalization step seems largely trivial. I'm guessing it has a small chance of being slightly off, which would disqualify the whole question. Since the IMO is a once-per-year attempt, they likely just didn't want to take any chances.

© Manifold Markets, Inc.