What will be true of OpenAI's next major LLM release (GPT-4.5 or GPT-5)?
92% – released in 2024
91% – achieve the highest Elo rating on LMSYS (Chatbot Arena)
85% – support more agentic, time-consuming tasks with minimal input
81% – context window >= 500k tokens
74% – support video input
72% – only available through ChatGPT interface at the beginning
71% – it is named GPT-5
64% – support long-term memory
50% – is released within 1 week of being announced

This question will be resolved when the new model is released. Variations of GPT-4 won't count; only major new models will qualify. The deadline will be postponed if neither model is released by the end of the month.


I really hope that GPT-5 is better at following style instructions instead of being so stubborn about its built-in personality, so that making custom GPTs is actually useful (beyond accessing custom APIs and data).

Can you add an option to the list of predictions: "has minimal (<5%) hallucinations at <=128k token length"?

@PaulJones2733 nice suggestion - when/where are such results normally shared?

@PaulJones2733 You mean, if given a representative sample of prompts of that length that it gets from users, it will hallucinate less than 5% of the time? What are the kinds of prompts, and what defines a hallucination?

@traders I created a similar question for Llama 3 and added a 1k subsidy -> /Soli/what-will-be-true-of-llama-3-in-the

support video input

For this and all the options about "supporting" different capabilities, how do we interpret the situation where the model is claimed to support a capability in an announcement, but it isn't available in the first version users get access to? For example, GPT-4 was announced as supporting image input, but ChatGPT didn't get image input until some time after release.

Also, specifically for video input: does slicing the video into frames, as Gemini 1.5 does, count as supporting video input, or does it require some richer form of support?

@GradySimon we would have to wait a reasonable amount of time until we are able to test the specific capability to resolve the market, or rely on reports from people who got beta/early access.

Regarding video input, I think the model just needs to be able to discuss any video file uploaded through the UI.
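For concreteness, here is a minimal sketch of what the "slicing the video into frames" approach looks like in practice: sample frames, encode them as images, and send them to a multimodal chat endpoint. This assumes opencv-python and the OpenAI Python SDK; the model name, file path, and sampling rate are placeholders, not confirmed details of any release.

```python
# Hypothetical sketch: "video input" via frame slicing. Sample every n-th
# frame, encode frames as base64 JPEGs, and send them as images to a
# multimodal chat model. Requires opencv-python and the openai SDK.
import base64

import cv2
from openai import OpenAI


def sample_frames(path: str, every_n: int = 30) -> list[str]:
    """Return base64-encoded JPEGs for every n-th frame of the video."""
    cap = cv2.VideoCapture(path)
    frames: list[str] = []
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            ok, buf = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(base64.b64encode(buf.tobytes()).decode())
        i += 1
    cap.release()
    return frames


client = OpenAI()
content = [{"type": "text", "text": "Describe what happens in this video."}]
content += [
    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{f}"}}
    for f in sample_frames("clip.mp4")  # placeholder file
]
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder multimodal model
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)
```

Whether this kind of frame sampling would count as "supporting video input" for resolution, as opposed to native video understanding, is exactly the ambiguity raised above.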

What counts as “long-term memory”?

@GradySimon I guess this would be more on the application layer (ChatGPT) and would require the AI assistant to be able to recall information from previous conversations.
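For illustration, a minimal sketch of that kind of application-layer memory, assuming the OpenAI Python SDK: embed snippets from past conversations and prepend the most relevant ones to a new prompt. All helper names and model choices here are hypothetical.

```python
# Hypothetical sketch: application-layer long-term memory. Embed snippets
# from past conversations and prepend the most similar ones to a new prompt.
# Requires numpy and the openai SDK; model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()
memory: list[tuple[str, np.ndarray]] = []  # (snippet, embedding) pairs


def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.asarray(resp.data[0].embedding)


def remember(snippet: str) -> None:
    """Store a snippet from a finished conversation."""
    memory.append((snippet, embed(snippet)))


def recall(query: str, k: int = 3) -> list[str]:
    """Return the k stored snippets most similar to the query (cosine)."""
    q = embed(query)

    def score(item: tuple[str, np.ndarray]) -> float:
        e = item[1]
        return float(q @ e / (np.linalg.norm(q) * np.linalg.norm(e)))

    return [s for s, _ in sorted(memory, key=score, reverse=True)[:k]]


# Usage: a fact saved in an earlier chat resurfaces in a new conversation.
remember("The user's name is Ada and she prefers concise answers.")
context = "\n".join(recall("What do we know about the user?"))
reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system", "content": f"Relevant memories:\n{context}"},
        {"role": "user", "content": "Say hi and pick up where we left off."},
    ],
)
print(reply.choices[0].message.content)
```

Under that reading, "long-term memory" would be a product feature built around the model rather than a property of the model itself, which matters for how the option resolves.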

@Soli This is great! I was happy to find an unlinked market that didn't differentiate on what the model is called, and so I put this up on the dashboard.

Would you consider opening this to submissions from other people?

@Joshua I was worried it would get a bit too messy and that none of the listed options would get any significant trading volume, which is why I did not allow submissions from other users. Happy to change that though if you tell me how haha. I tried to do it just now for 5 min and failed.

@Soli There's a toggle under the menu at the top right, I flipped it for ya. You can always flip it back of course.

@Joshua 🙏
