EG "make me a 120 minute Star Trek / Star Wars crossover". It should be more or less comparable to a big-budget studio film, although it doesn't have to pass a full Turing Test as long as it's pretty good. The AI doesn't have to be available to the public, as long as it's confirmed to exist.
@Mad2live you don't need to hold the whole film in a single context window. Generate a shotlist and some metadata about your actors, sets, etc. for continuity, and generate shots one by one. A ten-second shot is long in modern cinema.
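Roughly, in code, what that pipeline could look like; `generate_text` and `generate_video_clip` here are hypothetical stand-ins for whatever text/video models you'd actually call, not real APIs:

```python
# Sketch of a shot-by-shot pipeline: continuity lives in metadata,
# not in one giant context window.
import json

def generate_text(prompt: str) -> str:
    raise NotImplementedError  # placeholder: call your LLM of choice

def generate_video_clip(description: str, continuity: dict, duration_s: int) -> bytes:
    raise NotImplementedError  # placeholder: call your video model of choice

def make_movie(prompt: str, target_minutes: int = 120) -> list[bytes]:
    # One planning call produces a shotlist plus a "bible" of persistent entities.
    plan = json.loads(generate_text(
        f"Write a film bible (characters, sets, props) and a shotlist of "
        f"short shots (<=10s each) totalling {target_minutes} minutes for: "
        f"{prompt}. Return JSON with keys 'bible' and 'shotlist'."
    ))
    bible, shots = plan["bible"], plan["shotlist"]
    clips = []
    for shot in shots:
        # Each shot is generated independently; continuity comes from injecting
        # only the bible entries (actors, sets) that this shot references.
        refs = {name: bible[name] for name in shot["entities"]}
        clips.append(generate_video_clip(shot["description"], refs, shot["seconds"]))
    return clips  # splice into a single film with ffmpeg or similar
```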
@robm It needed to be in the last 10 days to keep up the pace, assuming that was 1% of the way (which I don't assume).
@DavidBolin I reject your premise of framing progress over just a few days. When this question opened almost 2 years ago, we were still marveling at videos like this
We're more than 40% of the way between that and Sonic 3.
I understand this market in these terms: three years from now, all entertainment will be in AI hands. Disney et al. will either have bought OpenAI or have sold to them, given any customer can just type a prompt and get exactly what they want. Why bother paying Netflix, Disney+, or anyone else? If you can make a 120-minute movie, is it that much harder to make a game from the same or a similar prompt? Please destroy my argument, thanks!
@JaimeSantaCruz Time delay; cost to individuals that still necessitates paying a distributor and designer; limitations in creative direction; random quality facets; lobbying against it by companies or by workers; or general social dislike making it infeasible to bring to market.
But yes, if this happens it does indicate a pretty big upheaval, but I do kinda expect that with AI.
@JaimeSantaCruz there would be a gap between generating a movie and generating the best movie in one query. Until then, Netflix can employ an army of "curators" to sift through the piles of slop submitted by "producers".
@ICRainbow Or an AI can generate it (o6 + an expensive Sora tier) at $10,000 per prompt, so it's still better than making a movie traditionally, but not something every youtuber does for every video.
@Lavander This market will hit 30% around the end of next year, when there are 2 years to go.
I think by 2028 there is a >50% chance the AI will be able to generate a high-quality world takeover without a prompt 😂 I don't even know what I'm gonna do with all the mana I'll have by then
@mckiev Yeah I think modeling the world well enough to generate every part of a movie is pretty close to FOOM and my timelines are not this short. I salute you betting on your beliefs though!
@Joshua in my opinion most "high-quality movies" have a very straightforward storyline, and the bet is about whether AI will be able to make facial expressions and action scenes good enough
@mckiev I think consistency of characters, an (even mildly) cohesive and compelling narrative over ~2 hours of runtime, and smooth integration of dialogue, video, plot, characters, music, etc. will be more relevant bottlenecks than facial expressions and action choreography. But the key obstacle, imo, will be for someone to actually build software that generates this from a single prompt, which is an essential part of this market's criteria. It's possible/probable that people will be using piecemeal AI tools to make movies by then, but this market is trying to capture whether someone can just type a prompt into a box, hit "Generate", and get a movie out. In January 2028.
@benshindel and like Joshua said, kudos to you for betting so much on your beliefs! It would admittedly be extremely cool if this was the case by 2028 (and we weren't all dead or in a dramatically uncertain world), and I wouldn't mind losing 100k mana to fiat that into existence
@mckiev I'm not sure what makes you worry about world takeover on such a short timescale; LLMs currently suck at displaying agent behavior or following long-term goals.
@benshindel this part I'm not worried about. There are already serious groups working on converting a single prompt to a series of shots, and I think they're doing well given the current state of the generative models.
LLMs currently suck at displaying agent behavior or following long-term goals.
Compared to what? 3 years ago they could hardly form a coherent sentence.
@robm Compared to the humans they're supposed to take over. LLMs aren't optimisers; I see no mechanism for takeover.
@ProjectVictory Don't be LLM-myopic. An agent is a mishmash of LLMs, scripts, databases, and whatnot. o1-preview and Sonnet can already make plans, break them down, and execute them piece by piece.
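For what it's worth, a minimal sketch of that mishmash; `llm` below is a stand-in for any chat-model call, nothing vendor-specific:

```python
# Minimal plan-then-execute agent loop: an LLM proposes steps, a script runs
# them one by one, and results are fed back so later steps can correct earlier
# mistakes. The "database" here is just a list.
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for any chat-completion call

def run_agent(goal: str, max_steps: int = 20) -> list[str]:
    memory: list[str] = []
    steps = llm(f"Break this goal into steps, one per line: {goal}").splitlines()
    for step in steps[:max_steps]:
        result = llm(f"Goal: {goal}\nDone so far: {memory}\nExecute this step: {step}")
        memory.append(f"{step} -> {result}")
    return memory
```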
Why do you think agency is so tractable?
It makes sense that capabilities similar to human higher reasoning are quite easy to achieve. We evolved e.g. our mathematical reasoning abilities in the blink of an eye on the evolutionary timescale. We became mathematicians soon after we evolved large brains (of course, there are arguments about causality, but this is where I have landed). Higher reasoning capabilities require a lot of compute to run, but on the scale of evolution they may actually be simple.
But it seems less likely that agency is simple. Evolution had a long time to figure agency out. And the solution that evolution came up with has all kinds of complicated parts we don't fully understand - consciousness, for example, is a known unknown.
Higher reasoning: easy to achieve, but requires big brains
Agency: difficult to achieve, but doesn't require big brains
Agency's a very different kind of thing to higher reasoning abilities. Agentic programmes - once we figure them out - probably can be run on decades-old hardware. Simple animals like rats seem to have agency down pat. So the fact that we have been seeing success with reasoning shouldn't make us suppose that solving agency is right around the corner.
We've been trying to solve agency for decades. We've had LLMs a little while and all attempts at imbuing agency - e.g. AutoGPT - have been total failures. There's no clear path to success. And it's a different kind of problem to the ones we are having success with. Why do you anticipate imminent success?
@benshindel this had me convinced, but upon further reflection, I think an agent will be able to do this soonish. Give it a big pot of money, and tell it to produce a movie. It mainly needs to make a plan that uses other AIs to do all the work: write a screenplay, then generate a storyboard, then render scenes and generate accompanying music, then splice scenes together (as a streaming video manifest file, probably).
Then it's mostly about the subjectivity of "good".
To me, the key thing is that the high-level steps aren't hard to figure out. It can be a formulaic movie, made in a formulaic way. It will probably (accidentally) deviate from a classic "blockbuster" in unexpected ways, but that could be viewed as refreshing rather than a failure.
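Concretely, the high-level plan is shallow enough to write down. Every function below is a hypothetical stand-in; all the difficulty hides inside the sub-AIs being called:

```python
# Sketch of the "producer agent": each stage delegates to a specialist model.
from typing import Iterable

def write_screenplay(prompt: str) -> str: raise NotImplementedError       # text model
def draw_storyboard(script: str) -> list[str]: raise NotImplementedError  # image model
def render_scene(panel: str) -> bytes: raise NotImplementedError          # video model
def compose_music(panel: str) -> bytes: raise NotImplementedError         # audio model

def write_manifest(segments: Iterable[tuple[bytes, bytes]]) -> str:
    # Placeholder: emit a streaming-playlist manifest referencing the rendered
    # scene/music segments, rather than one monolithic video file.
    raise NotImplementedError

def produce_movie(prompt: str) -> str:
    screenplay = write_screenplay(prompt)
    storyboard = draw_storyboard(screenplay)
    scenes = [render_scene(p) for p in storyboard]
    score = [compose_music(p) for p in storyboard]
    return write_manifest(zip(scenes, score))
```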
Why do you think agency is so tractable?
If you have the right RL-like training procedure, one that trains the model to successfully solve long problems, it seems like agency would just come out for free. Also, as they get smarter, AIs naturally get better at it, so really scale is all you need. Part of why agency currently fails is reliability and correcting your previous mistakes, and those have been getting better over time.
It makes sense that capabilities similar to human higher reasoning are quite easy to achieve. We evolved e.g. our mathematical reasoning abilities in the blink of an eye on the evolutionary timescale.
I'm skeptical of this line of reasoning, where because an ability required large advanced brains, it is somehow simpler to achieve than an ability which didn't. Clearly mathematical ability is less universally necessary for survival than agency is, and that is why agency evolved much earlier. It's not clear to me that advanced math ability can't be achieved with very small brains; see e.g. the calculator joke.
So the fact that we have been seeing success with reasoning shouldn't make us suppose that solving agency is right around the corner.
Of course not, but it's a big positive update that progress is happening on fundamental LM problems, and the more we scale in one domain, the more the marginal research and compute is better spent improving the other. For example, OpenAI has planned steps toward AGI which involved first solving reasoning and then solving agency. If they've solved reasoning, and the limiting factor on the economic usefulness of their reasoning models is now their degree of agency, it's much more likely than before that they'll devote significant resources to solving agency quicker.
We've been trying to solve agency for decades. We've had LLMs a little while and all attempts at imbuing agency - e.g. AutoGPT - have been total failures. There's no clear path to success. And it's a different kind of problem to the ones we are having success with. Why do you anticipate imminent success?
I don't think we should update much on decades without advanced agency, against the hypothesis that we'll solve agency soon, because the speed at which we solve fundamental, economically useful problems has increased dramatically in the last 5 years.
I don't see why you think this is a different kind of problem from the ones we're having success with. I anticipate success in the next 2 years because it's not that hard a problem, it can be solved, the labs really want to solve it, and they've shown repeatedly that they can solve longstanding difficult problems.