Many markets on capabilities are being resolved via Chatbot Arena, but I think Chatbot Arena scores might not be a very good measurement. See https://manifold.markets/DanielKokotajlo/gpt4-or-better-model-available-for#Cd09AsavvH6UVdLod4N6 for some discussion.
Another reason Chatbot Arena could fail is that as models get more powerful, chat use cases become less representative of overall capabilities. Minimally, a high fraction of current chat use cases are very easy, so performance on them saturates.
Note that this market just refers to whether I think the top few (e.g. top 10) Chatbot Arena scores reflect actual capabilities reasonably well, not whether Chatbot Arena can't be gamed. So if (e.g.) the best chatbots don't game Chatbot Arena (even if they could), the scores could still be sufficiently representative.
I'm open to comments trying to convince me either way, but I don't promise to keep up with this market.
@jacksonpolack Yes, I think current scores track reasonably well, but not amazingly well. So I would resolve YES if today were the end date.