
Will OpenAI make a fully multimodal LLM in 2024?
941 traders · Ṁ3,486 volume · resolved May 13
Resolved YES
Will OpenAI reveal a fully multimodal chat LLM in 2024, with at least 4 of the following multimodal capabilities: speech input, speech generation (TTS), image input/generation, and video input/generation?
The LLM must have these multimodal capabilities built in and cannot rely on a separate model. For example, GPT-4-Turbo in ChatGPT Plus using DALL·E 3 to generate images does not count, while GPT-4-Turbo taking images as input does count.
The model does not need to be released to the public; it only needs to be revealed.
P.S.: I will be participating in this market.
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ1,116
2 | | Ṁ69
3 | | Ṁ57
4 | | Ṁ54
5 | | Ṁ50
Related questions
Will OpenAI give their new LLM an anthropomorphic name?
9% chance
Will OpenAI announce a multi-modal AI capable of any input-output modality combination by end of 2025? ($1000M subsidy)
83% chance
What will be true of OpenAI's best LLM by EOY 2025?
OpenAI to release model weights by EOY?
83% chance
Will OpenAI's next major LLM release support video input?
37% chance
Will OpenAI release true multimodal image generation for GPT-4.5 before 2026?
16% chance
Will OpenAI's next major LLM (after GPT-4) feature natural and convenient speech-to-speech capabilities?
80% chance
Will the most interesting AI in 2027 be a LLM?
70% chance
Will the next major LLM by OpenAI use a new tokenizer?
77% chance
Will xAI develop a more capable LLM than GPT-5 by 2026?
51% chance