What will be true of OpenAI's Sora* model, at the end of 2025? [*see description]
72%
It will be the SOTA for text to video
58%
It can generate videos over 10 minutes long
56%
A second major version of the model has been released
15%
A third major version of the model has been released
13%
It has been renamed
15%
It was accessible to the public before May 2024
7%
Full description of model architecture will be public
91%
By default, the generated videos will be watermarked
84%
It has been referenced in a legal case about deepfakes
11%
It will be free to use
30%
It has a logo separate from the OpenAI logo
12%
OpenAI will release the number of model parameters
38%
It has been integrated as a feature on a major social media platform
8%
It can create a fully coherent short film from a prompt (20-40 minutes)
69%
It was trained on data created in a physics/game engine (eg Unreal Engine)
75%
A competing model has challenged Sora's dominance in the text-to-video space
68%
It will be the most popular text to video tool (determined by google search trends)
44%
Costs for an average 1-minute HD (or higher quality) video will be lower than $0.50
27%
It'll be legally banned in at least one EU country
76%
OpenAI will be sued over the model

Unless otherwise specified, the options are about the state of OpenAI's latest video model at the end of 2025. Options about things that might happen before then, or by a specific date before then, are also acceptable.

I'll N/A duplicates and options I consider invalid. If an option refers to an external source, I encourage the option creator to notify me when/if the option should resolve.

For the purpose of this market, any Text-to-Video model released by OpenAI after Sora will count.

Some clarification:

  • "released" means widespread public release OR widespread access for corporations, through partnerships or whatever else

  • the "Nth" model doesn't require release in the sense mentioned above. Sora is the 1st model, whether or not it ever gets "released". Any new text-to-video model announced after Sora counts as the 2nd model, and so on.

A YouTube video made only with Sora will get > 100M views

Latest Sora ‘short film’ released here - https://youtu.be/yplb0yBEiRo?feature=shared

But given that most official OpenAI videos only get 2M views at most, I doubt it'll reach 100M.

A second major version of the model has been released

Does this mean released to the public, or does release to red teamers or even a corporation count?

@JamesF Good question; I didn't really expect what's happening with the 1st model to happen when I made that option. I guess I'll go for:

if users or corporations can widely request access and a large number of them get access, it counts as released

Open to feedback / disagreement; if many people saw it differently, I'm willing to consider N/A'ing it and remaking the option more clearly.

Video will include sound

A few things here are brought up, though not much that hasn't been mentioned elsewhere.

They say that they won't be making Sora available "any time soon".

bought Ṁ10 It will be pay-per-u... NO

I'm pretty sure it's unlikely to be both pay-per-use and accessible through ChatGPT.

@ProjectVictory Why not? Is that not the case for the current models?

@Bayesian No; if you use Dall-E through your ChatGPT Plus subscription, I don't believe you're charged separately for the image generation.

@apetresc (Unless you mean, like, separately, because you could use DallE-3 through the API as well as use it for free through ChatGPT Plus)

A second major version of the model has been released

Do both models need to be released separately? Or would this resolve YES if they never release the recently demoed version and then only release what is announced as a second version?

It was trained on data created in a physics/game engine (eg Unreal Engine)

Would this resolve YES if the data created in a game engine were not created for the purpose of AI training, e.g. if the training data included a video of a game from YouTube? (I assume not, we already know it was trained on Minecraft videos)

@FH7979e That would not count

It'll be legally banned in at least one EU country

How does this resolve if it was banned at some point during the year but unbanned later in 2024 (a la ChatGPT in Italy)?

@CharlesPaul The phrasing means it's asking whether it'll be banned at the end of 2025. To ask whether it would be banned for any temporary amount of time in any EU country, you could ask "it'll have been banned in at least one EU country for at least some amount of time" or something like that.

It will be free to use

Does "free to use" resolve YES if they have a setup like "each user gets 30 minutes a month of video free, but you have to pay beyond that"? What about if they have a situation similar to GPT-3 before November 2022, where you got free tokens for signing up, but once they were gone you had to pay?

@Bayesian I guess I'd argue YES in the first example and NO in the second, the difference being that the tokens renew themselves regularly and give the user continued use. Happy to defer to your resolution if it gets complicated.

opened a Ṁ333 It has been renamed YES at 10% order

What if it's free via, like, Bing Chat, and sustainably so, not just a one-time free amount of tokens?

I would guess that would count as YES

It's interesting that, given his NO position, people are essentially betting that @EliezerYudkowsky will change his mind and state during the next two years that the Sora line is a threat to civilization.

I can see this logic, in that this model is the closest to "AGI" (whatever that means) there is now, and there could be some rapid advance that surprises everyone. He would see a capabilities advance very negatively, so I might take YES on this if it were cheaper.

@SteveSokolowski You could have a video model that outputs a perfect ten-hour television series exactly to your specifications, and I don't think it would be as dangerous as I expect LLMs to be in a few years.

@SemioticRivalry I think I said in another market that people are missing the big picture with Sora.

It's not about video; watching movies is a sideshow. It's about Sora generating an internal scene, making predictions about what is going to happen, and moving through it. You can tell it to do a lot more than an LLM, and connect its output to take real world actions based on its time-series predictions.

@SteveSokolowski Training something to generate text can only be successful if that something itself understands human thought, but to generate videos (most videos, at least) you only need to understand concepts like 3D space, time, and physics, which are much less likely than human thought to lead to dangerous things like agency, planning, and general intelligence (imo). Some videos do require understanding human thought to generate, so (imo) your proposed system will be possible, but I think it will always lag behind text-based models in "AGI-ness".

$0.50 per minute of video? I think we are off by many orders of magnitude, considering that GPT-4 Vision would cost about 1 cent per frame just to see the video, let alone generate every single pixel of it. I wouldn't be surprised if Sora, as it is today, requires >1,000 GPUs to run a single instance (and maybe over 10K). It might well not even be in production at the end of next year.
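For a sense of scale, here is a rough back-of-envelope sketch of that comparison; the 24 fps frame rate is an assumption for illustration, and the 1-cent-per-frame figure is the one quoted above, not actual OpenAI pricing.

```python
# Back-of-envelope check: what would it cost just to *read* one minute of video
# at the ~1 cent/frame GPT-4 Vision figure quoted in the comment above?
FRAMES_PER_SECOND = 24        # assumption: typical video frame rate
COST_PER_FRAME_USD = 0.01     # per-frame figure cited in the comment above
VIDEO_LENGTH_SECONDS = 60     # the option asks about a 1-minute video
TARGET_COST_USD = 0.50        # the option's cost threshold

frames = FRAMES_PER_SECOND * VIDEO_LENGTH_SECONDS   # 1,440 frames
cost_to_view = frames * COST_PER_FRAME_USD           # about $14.40

print(f"Frames in one minute: {frames}")
print(f"Cost just to look at them: ${cost_to_view:.2f}")
print(f"Ratio to the $0.50 target: {cost_to_view / TARGET_COST_USD:.0f}x")
```

On these assumed numbers, merely viewing the frames costs roughly 29x the $0.50 target before a single pixel is generated, which is the gap the comment is pointing at.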

By default, the generated videos will be watermarked
bought Ṁ175 By default, the gene... YES

For this, does C2PA count as watermarking?

@AnilJason No, not if there's no visual indicator on videos by default.

@Bayesian At least, that seems to be the definition of watermarking on Google; let me know if you disagree / why.
