E.g. "make me a 120-minute Star Trek / Star Wars crossover". It should be more or less comparable to a big-budget studio film, although it doesn't have to pass a full Turing Test as long as it's pretty good. The AI doesn't have to be available to the public, as long as it's confirmed to exist.
"I just recreated iconic scenes from Game of Thrones using Veo 2 from
@GoogleDeepMind! How many can you recognize? We can finally make the ending we deserve."
https://x.com/ammaar/status/1869433711869211034
I'm thinking of switching to NO. We're not going to get a model whose output is even merely comparable to big-budget studio films.
"Everything here is 100% generated w/ Google Veo 2. I've got early access, and the visual fidelity and prompt adherence is genuinely nuts."
https://x.com/bilawalsidhu/status/1868846342958989609
@RiskComplex The progress in the last 12 months is insane. And we have another 3 years of ever-accelerating progress.
The progress in the last 12 months is insane
@RiskComplex You are so close. Your subconscious is already getting it.
@skibidist Listens carefully to subconsciousness... "Turns out that scaling really IS all you need!"
Wow, you're right - thanks!
Okay, at this point, I think it's important to really ask: what counts as a "high-quality movie"?
Are we comparing it to quality of Hollywood movies of 2028 (which would presumably also benefit a lot from AI), or the way Hollywood movies were in 2022-2023, or even classics like Star Wars Episode IV?
Is it okay for people to notice obvious signs that the movie is AI-made, as long as they don't think it affects the quality of the experience?
"It should be more or less comparable to a big-budget studio film" As opposed to low budget? Like, what is the difference between high budget and low budget films, that is relevant to AI movies? Spectacle or the "feeling" you are watching a blockbuster?
How reliable does it have to be? Does it have to consistently produce movies on that level or more like, user is expected to get quality movies the same way users today are expected to get quality pictures (meaning, play around with prompts and settings, generate many, and pick the best one)?
If it can make movies that look good in some genre, but is very limited in types of movies or genres it can make, does that count?
Basically, what are the expectations for the kind of movies it can create? Even with something like the current Sora, one could imagine a human stitching a lot of scenes together in a mostly seamless (okay-ish) way, as long as the protagonist is something like Detective Pikachu, and as long as the story is good enough to carry the movie. If Sora were able to create okay scenes 1-2 minutes long, you would need maybe 80 videos, which should be doable if the scenes are written by an LLM that is aware of Sora's limitations and compensates for them with clever writing.
My understanding is that the movie has to be easy to create by a user who plays around with prompts and settings, and has to be "watchable" in such a way that, before ChatGPT and Sora, you could show the movie to someone or play it in a cinema, and people would watch the movie without massively walking out of theater, complaining it's a scam. Maybe give it a >=6 rating on IMDB on average.
Does that sound about right or is the expectation for it very different?
Obviously, I'm biased, but I think it would be good if you could clarify more what your expectations are. Because I expect that the answer depends a lot on criteria used.
@JonTdb03 "people would watch the movie without massively walking out of theater, complaining it's a scam." We don't share the same understanding of what "High quality" means.
@Zardoru In this context, I don't think there is much difference between a "high quality" and a "low quality" movie. To a large degree, the difference between good and bad movies comes down to a bunch of things: picture quality and effects, sure, but also script, pacing, acting, and music. And then there is a bunch of things that aren't a matter of the movie itself but still affect whether people like it, such as when and where it was released, how much hype there was for it, whether it's a sequel to a famous or highly anticipated movie, what it was marketed as, and what expectations people had for it.
But sure, if the criterion for a high-quality movie is that it would achieve a >=8 score on IMDB, that should be made clear.
@JonTdb03 the creator made this as part of a 5 year prediction 2 years ago, and is unlikely to come on Manifold and clarify, but if you're familiar with his work and his blog you'll know that he's a very thoughtful person and is unlikely to resolve this on a whim. I'd guess that small variations in the quality of the movie are unlikely to be relevant for resolution: there's probably a 99% chance that in January 2028 we either absolutely do not have anything like this market would require, or we have AGI that can readily create movies of high quality. In those 1% of scenarios where we have an edge case though, I think you can trust this creator.
@RiskComplex Yeah, three more years, and you can barely make a 5-second clip.
Ask yourself, as Altman would put it, "if you believe in your heart" that we are 25% closer to full length high-quality movies than we were a year ago.
@DavidBolin Regardless of whether we're talking about LLMs, GANs, or any other architecture, the one thing that's clear is that progress is anything but linear. In 2018 machine learning experts thought it would take ~30 years before we had an AI capable of writing a novel. What they didn't take into account is that each successive increase is larger and takes less time than the previous one. You're making the same mistake.
@RiskComplex I am extremely confident AI will not be replacing programmers in the next five years.
I am even more confident, if possible, that if programmers are EVER replaced, all jobs whatsoever, including physical ones, will be replaced within 3 years after that at most, if not sooner.
@DavidBolin Devin released a $500/month version that does 100% of what a trainee could do, ~80% of what I need a jr dev to do, and ~20% of what a mid-level dev can do. Within a year it will be doing ~100% of what a jr dev does and maybe ~40% of what a mid-level dev can do. I do not need a trainee or jr dev and will not be hiring them moving forward. Other employers in similar situations seem to share my sentiment (take that for what you will).
So when you say it will not be replacing programmers in the next five years, do you mean the senior/staff/principal engineers? Or do you think that my comments about trainee/jr/mid-level devs are somehow incorrect or not indicative of the broader picture, and that their jobs are safe as well?
If a year from now we look at the employment numbers of jr devs and see a significant decrease compared to general employment numbers (across industries not impacted by AI, for example), will you admit that you're wrong, or will you move the goalposts and claim that those aren't real programmers and you were talking about staff engineers with 10+ years of experience?
@RiskComplex 80% of a jr dev is worse than useless. Give that $500/month to mid-level and senior devs as a bonus; it would be more effective.
Some companies will be tempted to lay off devs, expecting productivity gains from the remaining ones using AI. Not sure it's a good idea.
What is certain is software testers will have a ton of work, as software using AI will generate endless bugs.
@Zardoru My experience so far has been that 1 jr dev with Devin has the output of 5-6 jr devs without Devin. What you're describing ('worse than useless', 'endless bugs', etc.) is just a user problem. If you don't understand what you're doing and you're careless or not using best practices, you'll have a ton of problems regardless of whether you use Devin or go manual. The only difference is that you're much slower and get tired infinitely faster.
@RiskComplex Specifically, it is incorrect even about juniors.
And no, I will not say "I was talking about devs with 10 years of experience."
If you only hire devs with 10 years of experience, very quickly you will have no one at all to hire, for obvious reasons.
Companies will not make that mistake.
@DavidBolin I'm not sure what you mean when you say "it" is incorrect even about juniors. What is? Are you saying that my claim that Devin can do 80% of what a junior dev can do is incorrect?
You don't have to take my word for it:
https://x.com/mckaywrigley/status/1866598700644176240
8 examples from Ethan Mollick: https://x.com/emollick/status/1770128785494700333