To qualify, a video game must have its graphics rendered by AI in real time. An AI should be rendering graphics at least 50% of the time in an average playthrough. When the AI is rendering graphics, it should generate at least one image per second. The images should generally depend on the very recent actions of the player - for instance, the player character immediately starts moving left when you press the left button.
It's okay if certain parts of the game are not AI-rendered, like UI elements. However, it doesn't count if the AI just starts from e.g. a 3D mesh render and postprocesses it to make it prettier or to fill in frames. It has to generate the video more-or-less from scratch. Specifically, I'd look for a system that generates visual elements like characters and scenery during gameplay and simulates their motion, doing this entirely through ML rather than programmatically, similar to OpenAI's Sora. It's okay if the AI starts with a premade textual or visual prompt, but it has to be able to generate and animate new visual elements.
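To make the criteria concrete, here's a minimal sketch of the kind of game loop that would qualify. This is illustrative only, not part of the resolution criteria; the model object and its generate_frame method are hypothetical placeholders, not any real API:

```python
import time

def run_ai_rendered_game(model, get_player_input, display, fps=1.0):
    """Minimal sketch of a loop that would satisfy the criteria above:
    every displayed frame is generated by an ML model conditioned on the
    player's recent input, not rasterized from meshes. All names here
    (model, generate_frame, etc.) are hypothetical placeholders."""
    frame_interval = 1.0 / fps       # criterion: at least one image per second
    recent_frames = []               # short frame history for temporal coherence
    while True:
        start = time.time()
        action = get_player_input()  # e.g. "left" while the left button is held
        # The model generates the next frame more-or-less from scratch,
        # conditioned on previous frames and the latest action, so the
        # image depends on the player's very recent behavior.
        frame = model.generate_frame(context=recent_frames[-8:], action=action)
        display(frame)
        recent_frames.append(frame)
        # Sleep off any leftover budget to hold the target frame rate.
        time.sleep(max(0.0, frame_interval - (time.time() - start)))
```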
The following is speculation, not resolution criteria: What I'm imagining is a very open-ended game where the AI makes up new landscapes and encounters for you on the fly, sort of like the Mind Game in the book "Ender's Game." This seems doable by a very fast version of OpenAI's Sora, where you just ask it to generate video game footage on the fly, conditioned on the player's button presses.
Markets with the same resolution criteria
/CDBiddulph/will-a-video-game-with-airendered-g-3b1136cb8e7c
/CDBiddulph/will-a-video-game-with-airendered-g-30dbd8e5eede
/CDBiddulph/will-a-video-game-with-airendered-g-2e37a190fefe
/CDBiddulph/will-a-video-game-with-airendered-g (this market)
@traders This market has resolved, but if you'd like to bet on the number of paid users of AI-rendered video games, please take a look at "When will AI-rendered video games have 1 million paid users?"
Six years, 2025 through 2030, all conveniently located in one market!
Resolving YES due to https://oasis-model.github.io
See https://manifold.markets/CDBiddulph/will-a-video-game-with-airendered-g-2e37a190fefe#531sypqnmz3 for discussion
@traders Here are two more versions of the same market. Both of the markets I made are trading higher than I expected, so I'm curious which year Manifold thinks we'll hit the 50% probability mark.
/CDBiddulph/will-a-video-game-with-airendered-g-3b1136cb8e7c
/CDBiddulph/will-a-video-game-with-airendered-g-30dbd8e5eede
@traders I made a version of this market that closes in 2030 rather than 2038: https://manifold.markets/CDBiddulph/will-a-video-game-with-airendered-g-2e37a190fefe
@TimWeissman Sorry to pile on with the questions, but another possible edge case arising soon is video games rendered using neural radiance fields. They use deep learning, but I wouldn't really call them 'AI'.
@TimWeissman @1941159478 I think the part of the description that disqualifies both of these examples is that it "has to generate the video more-or-less from scratch." I edited the description to add more clarification that I think should address these questions more clearly.
The description is very confusing. What you describe is generation, but you still use the word "render." I initially thought this was about AI creating an open-world game in real time with all the details. I don't understand why you think the "rendering" part should be done by AI. Why not expect AI to generate game world models instead? Otherwise you're asking for a very inefficient process to run in real time. Even if it's doable, nobody would waste their compute resources on that.
@RedderThanEver Sure, it's possible that there will be games by 2038 that generate 3D models with AI on-the-fly, but that's not what this question is asking about. By "render," I just mean the kind of thing that OpenAI's Sora does when it generates a video in response to a prompt.
I think in some sense it would actually be harder to use AI-generated 3D models, because then you have to work with code that moves the 3D models around, and either that code has to be written by the AI (opening the way for random bugs that crash your game) or it has to be pre-written by humans (limiting the game's flexibility).
As a concrete example, Sora can already generate Minecraft videos. I think it would be a lot harder to use AI to generate an entire Minecraft scene that looks as realistic as those videos through 3D models and code. Rendering everything from the bottom up with AI lets you use its raw spatial intelligence without worrying about edge cases - even if you sometimes get weird stuff like pigs flying away from you and objects disappearing, it could work pretty well overall. Something something bitter lesson.
Of course, the main question is whether we can do this efficiently enough for consumers to run the game affordably and in real time. The answer is obviously no right now, but I think a lot can change by 2038.
@CDBiddulph I know there are problems with code, but by making the problem domain narrower we can easily work with AI. Imagine this scenario:
The game already uses a powerful game engine such as Unreal, which has fascinating coverage of a lot of possibilities, be it complex objects, their interactions, or the physics between them. The problem with most games is that programming all of this is so incredibly time-consuming. But AI generating those models on the fly and estimating the physics of each interaction would make this very easy. A lot of things also can't be programmed because, by the nature of a game, very unexpected scenarios can happen. Imagine a developer makes a game with a scenario where a character might sit on a couch. Now imagine the deformation of the couch depends on the characteristics of the person sitting on it. Or imagine the character decides to shoot at the couch. AI can easily decide what physical deformations should happen, but programming so many scenarios by hand would be impossible.
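A minimal sketch of that division of labor, assuming a hypothetical DeformationModel and engine interface (not Unreal's actual API): the engine keeps ownership of the game loop and rendering, and the model only answers interaction questions nobody hand-coded:

```python
from dataclasses import dataclass

@dataclass
class ImpactEvent:
    """Engine-reported interaction: what touched what, where, and how hard."""
    object_id: str   # e.g. "couch_01"
    position: tuple  # contact point in world space (x, y, z)
    force: float     # impact force in newtons
    source: str      # e.g. "bullet", "npc_sitting"

class DeformationModel:
    """Hypothetical ML model that predicts mesh deformations for
    interactions the developers never explicitly programmed."""
    def predict(self, mesh, event: ImpactEvent):
        # In a real system this would run a learned physics model and
        # return displaced vertex positions; here it's just a stub.
        raise NotImplementedError

def on_impact(engine, model: DeformationModel, event: ImpactEvent):
    # The engine still handles rendering and the main loop; the model
    # only answers "how should this mesh deform?" for arbitrary events.
    mesh = engine.get_mesh(event.object_id)
    deformed = model.predict(mesh, event)
    engine.update_mesh(event.object_id, deformed)
```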
Coming back to the Sora case, you will at best create frames in seconds, whereas players expect 100 fps with sub-10 ms latency. Sora will make it feel like you're playing on the Moon, trying to connect to a dial-up Win98 machine in your grandma's basement.
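To put numbers on that gap, here's a rough back-of-the-envelope comparison; the two-second generation time is an illustrative assumption, not a benchmark of Sora or any other model:

```python
# Rough latency-budget comparison: conventional real-time rendering
# vs. per-frame AI video generation. The generation time below is an
# illustrative assumption, not a measurement of any specific model.

TARGET_FPS = 100
frame_budget_ms = 1000 / TARGET_FPS  # 10 ms per frame at 100 fps

assumed_gen_time_s = 2.0             # assumed: ~2 s to generate one frame
assumed_gen_time_ms = assumed_gen_time_s * 1000

speedup_needed = assumed_gen_time_ms / frame_budget_ms
print(f"Frame budget at {TARGET_FPS} fps: {frame_budget_ms:.0f} ms")
print(f"Assumed per-frame generation time: {assumed_gen_time_ms:.0f} ms")
print(f"Generation would need to be ~{speedup_needed:.0f}x faster to fit the budget")
# Under these assumptions: ~200x faster.
```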
@RedderThanEver Yes, I think it would be impossible to code for all of those scenarios by hand, but I think it would also be very difficult for AI to code those scenarios on the fly.
I don't think AI video generation will be so limited by framerate by 2038. Anyway, if it's a slow-paced exploration game, it's probably fine if the framerate is low.
I just made a related market to try to address this crux (it resolves in 2030, and it's not exactly the same as real-time generation that reacts to e.g. game controls, but I think it should be a good indicator of whether that would be viable by 2038).