How well will OpenAI's o1 (not o1-preview) do on the ARC prize when it's released if tested?
Closes Jan 1 · Expected value: 34.52

The creators of the ARC Prize have already tested OpenAI's new o1-preview and o1-mini models on the benchmark. The full (non-preview) version of o1 performed substantially better on OpenAI's math benchmarks (see below) and will seemingly be released before the end of the year. Assuming it's tested on the ARC Prize benchmark, how well will the full version of o1 perform?

Note 1: I usually don't participate in my own markets, but in this case I am participating since the resolution criteria are especially clear.

Note 2: The ideal case is that the ARC Prize team tests o1 under the same conditions. If they don't, I'll try to make a fair call on whether unofficial testing matches the conditions closely enough to count. If there's uncertainty, I'll err on the side of resolving N/A.


The preview got like 25, IIRC?

@MartinVlach 21% actually, 12 points higher than 4o

Which set? Public eval or semi-private?

@Usaar33

> OpenAI o1-preview and o1-mini both outperform GPT-4o on the ARC-AGI public evaluation dataset. o1-preview is about on par with Anthropic's Claude 3.5 Sonnet in terms of accuracy but takes about 10X longer to achieve similar results to Sonnet.

Public, same as the evaluation I linked.
