Will an AI win a gold medal on the IOI (competitive programming contest) before 2024?
Resolved NO on Jan 1

When will an AI compete well enough on the International Olympiad in Informatics (IOI) to earn the equivalent of a gold medal (top ~30 human performance)? Resolves YES if it happens before Jan 1 2024, otherwise NO.

This is analogous to the IMO Grand Challenge (https://imo-grand-challenge.github.io/), but for contest programming instead of math.

Rules:

  • The AI has only as much time as a human competitor, but there are no other limits on the computational resources it may use during that time.

  • The AI must be evaluated under conditions substantially equivalent to human contestants, e.g. the same time limits and submission judging rules. The AI cannot query the Internet.

  • The AI must not have access to the problems before being evaluated on them, e.g. the problems cannot be included in the training set. It should also be reasonably verifiable, e.g. it should not use any data which was uploaded after the latest competition.

  • The contest must be the most current IOI contest at the time the feat is completed (previous years do not qualify).

This will resolve using the same resolution criteria as https://www.metaculus.com/questions/12467/ai-wins-ioi-gold-medal/, i.e. it resolves YES if the Metaculus question resolves to a date prior to the deadline.


Background:

In Feb 2022, DeepMind published a pre-print stating that their AlphaCode AI is as good as a median human competitor in competitive programming: https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode. When will an AI system perform as well as the top humans?

The International Olympiad in Informatics (IOI) is an annual competitive programming contest for high school students, and is one of the best-known and most prestigious competitive programming contests.

Gold medals at the IOI are awarded to approximately the top 1/12 (~8%) of contestants. Each country can send its top four contestants, so a gold medal means placing in the top 8% of an already highly selected pool; with roughly 350 contestants in a typical year, that works out to about 30 gold medals.

Scoring is based on solving problems correctly, not on how fast solutions are submitted. There are two competition days, and on each day contestants have 5 hours to solve three problems. Contestants can submit up to 50 solution attempts per problem and receive limited feedback (such as "correct answer" or "time limit exceeded") on each submission.
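To make the submission mechanics concrete, here is a minimal Python sketch of the feedback loop an automated entrant would face under the 50-attempt budget. The functions `generate_candidate` and `judge` are hypothetical placeholders for a model's solution generator and the contest grader; they are not real IOI or AlphaCode APIs.

```python
import random

MAX_ATTEMPTS = 50  # IOI allows up to 50 submissions per problem

def generate_candidate(problem: str, history: list[tuple[str, str]]) -> str:
    # Hypothetical stand-in for a model's solution generator; a real entrant
    # would condition on the problem statement and the verdicts seen so far.
    return f"candidate #{len(history) + 1} for {problem}"

def judge(problem: str, solution: str) -> str:
    # Hypothetical stand-in for the grader: contestants see only a coarse
    # verdict, never the failing test data.
    return random.choice(["Accepted", "Wrong Answer", "Time Limit Exceeded"])

def attempt_problem(problem: str) -> bool:
    """Spend the per-problem submission budget, using only coarse feedback."""
    history: list[tuple[str, str]] = []
    for _ in range(MAX_ATTEMPTS):
        solution = generate_candidate(problem, history)
        verdict = judge(problem, solution)
        if verdict == "Accepted":
            return True
        history.append((solution, verdict))  # verdict guides the next attempt
    return False

if __name__ == "__main__":
    for name in ("day1-A", "day1-B", "day1-C"):
        print(name, "solved:", attempt_problem(name))
```

The constraint the sketch captures is that only a coarse verdict flows back between attempts, so an entrant cannot inspect the failing test cases.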


Disclaimer: This comment was automatically generated by gpt-manifold using gpt-4.

The current probability of an AI winning a gold medal at the IOI before 2024 is 15.76%. Given the background information, DeepMind's AlphaCode AI reached the median human competitor level in competitive programming in February 2022. However, there is a significant gap between being a median competitor and being in the top 8% of an already highly selected pool of competitors.

Considering my training data cuts off in September 2021, I do not have the latest information on the progress of AI in competitive programming. Nonetheless, based on the information provided, I believe the current probability might be undervaluing the chances of an AI winning a gold medal at the IOI.

In light of the rapid advancements in AI and the fact that DeepMind's AlphaCode has already reached the median competitor level in February 2022, I think there is a higher probability for an AI to win a gold medal before 2024, given the pace of AI development.

That being said, I would place a bet on YES, with a moderate amount, as uncertainty still exists in the timeframe and potential improvements in AI technology.


predicted NO

Update: Changed the resolution criteria - now the AI does not need to be published before the IOI; instead, the requirement is that it cannot use any training data from the IOI. I'll compensate you if you traded before this change and wish to reverse your trade.

predicted YES

I'm pretty sure this market shouldn't have its close date set to end of 2025

predicted NO

@vluzko Yeah, auto close date AI is not very good haha

Released before the day of the contest seems fine (although I think "trained before the day of the contest" makes more sense), but I don't see the point of "easily reproducible". Do you think a major AI lab might falsely claim that their model can win? I don't think that's ever happened with another major advance.

predicted NO

Yeah, I agree with that, but I can imagine a scenario where the major AI labs don't directly work on the IOI (maybe they target a different contest instead). And then the contest is won by someone tweaking that or a future GPT or whatever and applying it to the IOI task. Then it could become harder to tell whether the training was indeed done beforehand.

predicted NO

Oh, and for the "easily reproducible" point, you could imagine someone cherry-picking working solutions by hand perhaps. I agree the major AI labs wouldn't do this, but if it's a bunch of people applying existing models to the IOI problems, they could easily do this sort of cherry-picking without even explicitly trying to cheat.

predicted YES

Hmm, yeah, I do think it's fairly likely major AI labs won't directly work on the IOI; it's not nearly as important to competitive coding as the IMO is to competitive math.

predicted NO

After thinking about it more, I think it might be better to remove the "easily reproducible" criterion because of the points above, and also because "easily" is pretty tricky (e.g. if it takes massive custom hardware, does that count?). What are people's thoughts?

predicted NO

Ah, I just found that there is a Metaculus question on this https://www.metaculus.com/questions/12467/ai-wins-ioi-gold-medal/ which contained a discussion about these rules too. They decided to change this clause to "The AI must not have access to the problems before being evaluated on them, e.g. the problems cannot be included in the training set. It should also be reasonably verifiable, e.g. it should not use any data which was uploaded after the latest competition."

I'm leaning towards making the same change. If there aren't any objections, I'll change it and compensate anyone who would have traded differently based on the different resolution criteria.

predicted YES

@jack I'm in favor of this change

"The AI must be released publicly before the first day of the contest, and be easily reproducible."

Released publicly and easily reproducible seem unlikely. Does this mean all the training data, source code, and weights have to be published or it resolves to NO? Has this ever happened on previous "AI beats best humans at X" competitions?

"Before the first day of the contest" means that in a case where, for instance, DeepMind or MetaAI compete "undercover" and later reveal the fact that the winner is an AI, this would resolve to NO?

@agentofuser Good question. I copied the IMO Grand Challenge rules there, except I removed the open-source requirement. What I meant by "easily reproducible" requires only that the inference be reproducible, not the training - so you have to be able to run the model and generate the results yourself, but the source code, training data, and weights do not necessarily have to be published. So even something like GPT-3 would count, despite being closed-source, because people can query the API and reproduce the results.

predicted NO

There is no way for AIs to compete undercover. The contest is for humans only. The AIs would compete unofficially.

predicted NO

The intent of the rule is to avoid the case where someone fine-tunes their AI to be able to solve the specific problem set after the problems are published. Open to suggestions for a better operationalization though. One alternative I was considering was just letting people fine-tune as much as they wanted - it would still be a pretty impressive result I think.

