GPT-4 was announced on March 14, 2023. The accompanying technical report describes GPT-4 as a "large multimodal model (accepting image and text inputs, emitting text outputs)".
However, both the API and the UI playground version of GPT-4 currently accept only text inputs, with image input "coming later".
Resolves to the month when OpenAI allows at least some portion of the general public access to the image-input "multimodal" version of GPT-4, whether by full release or staggered rollout, and whether via the API or the UI, by December 2, 2023.
If OpenAI has not released it by the end of April 2024, then new options will be added accordingly.
November 30, 2022 is the ChatGPT release date; if they don't release it before then, an anniversary update would be fitting. GPT-4 itself was released in March 2023, so an anniversary update to that is also possible, though quite delayed.
For all end-of-month scenarios, the Pacific Time (PT) timezone will be used.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ488
2 | | Ṁ165
3 | | Ṁ57
4 | | Ṁ56
5 | | Ṁ0
From OpenAI's blog post (https://openai.com/blog/chatgpt-can-now-see-hear-and-speak):
“Image understanding is powered by multimodal GPT-3.5 and GPT-4.”
And since the description of this market says it resolves when “OpenAI allows at least some portion of the general public access to the image input "multimodal" version of GPT-4”, this market should resolve to September 2023.
https://openai.com/blog/chatgpt-can-now-see-hear-and-speak
ChatGPT can now see, hear, and speak
“Plus and Enterprise users will get to experience voice and images in the next two weeks”. OpenAI’s X page further specifies: https://x.com/openai/status/1706280618429141022?s=46
“ChatGPT can now see, hear, and speak. Rolling out over next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms).”
Does access have to be directly offered by OpenAI, or does access through a third-party app count, provided it's the same model and OpenAI offers it via an API or UI?