Resolves YES if, before 2024, an AI solves at least as many points' worth of problems as the best human competitor on any single contest from the following competitive programming contests: IOI, ICPC, or CodeForces. Otherwise NO. (In particular, this ignores scoring based on how quickly a problem is solved, so the AI can't win just by submitting solutions inhumanly fast.)
This is similar to the IMO Grand Challenge (https://imo-grand-challenge.github.io/), but for competitive programming instead of math, and with the requirement to rank first rather than just earn a gold medal (typically top 5-10%).
Detailed rules: see the detailed rules on this market.
Background:
In Feb 2022, DeepMind published a pre-print stating that their AlphaCode AI is as good as a median human competitor in competitive programming: https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode. When will an AI system perform as well as the top humans?
I updated the resolution criteria and scoring definitions as per the discussion on the 2024 market https://manifold.markets/jack/will-an-ai-outcompete-the-best-huma.
Please let me know if you have any feedback or objections. I don't think this changes the probability of the market in any significant way; it just makes the definitions better.