Making Games More Real, Artificially
by Noel Jose, on 23/03/2025
In season 1, episode 42 of Phineas and Ferb, titled “Out of Toon”, the boys build the PF 5000 Animatron. With this machine, all one has to do is walk up to it, get scanned, and a completely artificially generated character with a complex backstory and story beats is created in seconds. Although these kids invented it back in 2008, it’s no stretch to say we’ve almost reached their height of technology in 2025, all thanks to rapid advancements in Generative AI.
At the top of it all, NVIDIA seems to be the reigning authority when it comes to Gen AI in games. At CES 2025, NVIDIA announced a serious boost to their already impressive ACE suite, a set of RTX-accelerated digital human technologies released in 2023 to take conversational NPCs and agents to the next level. Now, they’ve gone a step further and are expanding ACE from conversational NPCs to autonomous game characters that use AI to perceive, plan, and act like human players. According to their blog post on January 6th, 2025, “Powered by generative AI, ACE will enable living, dynamic game worlds with companions that comprehend and support player goals, and enemies that adapt dynamically to player tactics.”
This implies that only a few years down the line, you could join an entire 100-player battle royale with a non-zero chance of being the only real player in the game.
Hopefully it doesn’t get to that; for now, the plan seems to be “AI Teammates that can join your party, battle alongside you, finding you specific items that you need, swapping gear, offering suggestions on skills to unlock, and making plays that’ll help you achieve victory.” (from the same blog post)
Wemade Next partnered with NVIDIA to showcase this technology with “Asterion”, a revolutionary AI boss fight. Regular bosses follow standard attack patterns and traditional combat, straying only due to randomness in the terrain. Asterion, a resident of MIR5 (built on Unreal Engine 5), is instead claimed to learn from the player’s actions: he remembers previous encounters and the strategies used effectively against him, changing his patterns every time. However, this only adds to the current issue of every game requiring more than a supercomputer to render a single zone, and for MMORPGs, the intended audience for Asterion, the game would require every player to own top-of-the-line hardware. 1,2
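Neither NVIDIA nor Wemade Next has published Asterion’s internals, but the core idea — remember what the player countered and shift away from it — can be sketched in a few lines. Everything here (the class, the pattern names, the weight multipliers) is my own illustrative invention, not their implementation:

```python
import random


class AdaptiveBoss:
    """Toy sketch: a boss that shifts away from attack patterns
    the player has successfully countered in past encounters."""

    def __init__(self, patterns):
        # Higher weight = more likely to be used next encounter.
        self.weights = {p: 1.0 for p in patterns}

    def choose_pattern(self):
        patterns = list(self.weights)
        weights = [self.weights[p] for p in patterns]
        return random.choices(patterns, weights=weights, k=1)[0]

    def remember(self, pattern, player_countered):
        # Penalize patterns the player beat; reinforce ones that worked.
        if player_countered:
            self.weights[pattern] = max(0.1, self.weights[pattern] * 0.5)
        else:
            self.weights[pattern] *= 1.2


boss = AdaptiveBoss(["charge", "fire_breath", "summon_adds"])
for _ in range(5):  # five simulated encounters
    move = boss.choose_pattern()
    # Pretend the player has figured out how to dodge the charge.
    boss.remember(move, player_countered=(move == "charge"))
```

A real version would condition on far richer state (positioning, gear, timing), but the memory-and-reweight loop is the part that makes each encounter feel different.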
With the RTX 50 series coming out and DLSS 4 (Deep Learning Super Sampling) releasing, it seems like more of your game will be artificially generated than rendered. DLSS Multi Frame Generation creates up to 3 generated frames per rendered frame, insanely boosting FPS. In theory this seems revolutionary, but users report “ghosting” in their visuals; i.e., weird or abstract details that appear out of nowhere, caused by existing issues in AI image generation. NVIDIA has tried to combat ghosting by switching from the previous convolutional neural networks to transformer-based models, but only further updates will tell if they can truly remove the issue altogether.3
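The “up to 3 generated frames per rendered frame” claim translates into a simple upper bound on displayed frame rate. A back-of-envelope sketch (the function and the example numbers are mine, not NVIDIA benchmarks):

```python
def effective_fps(rendered_fps, generated_per_rendered=3):
    """Naive upper bound for Multi Frame Generation: each rendered
    frame is followed by N AI-generated frames, so displayed frames
    per second = rendered * (1 + N). Ignores generation overhead,
    frame pacing, and added latency, so real-world gains are lower."""
    return rendered_fps * (1 + generated_per_rendered)


print(effective_fps(30))     # 30 rendered FPS -> up to 120 displayed FPS
print(effective_fps(60, 1))  # classic 1:1 frame generation doubles it
```

Note the catch: the generated frames raise smoothness, not responsiveness — input is still sampled only on the rendered frames.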
Okay that’s enough about NVIDIA (they still haven’t made a comeback in the S&P 500 anyways, jk don’t sue). Decart AI made waves in December 2024 with Oasis, an “AI-built open-world game”, or as I call it, a schizophrenic’s Minecraft.
Oasis refines noise patterns and player context into images in 0.04 seconds, achieving a “smooth” 20 FPS. Its transformer-based architecture includes:
- An input layer for player commands and environmental data
- A context layer for short-term memory
- A diffusion layer for visual generation
- An output layer for consistency
(Figure: Oasis’s Diffusion Transformer architecture)
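Decart hasn’t released Oasis’s code, but the four stages above can be wired together schematically. Every name below is illustrative (the only numbers taken from the source are the 24-frame context window), and the “diffusion” step is a stand-in string rather than an actual denoising model:

```python
from collections import deque

CONTEXT_FRAMES = 24  # Oasis's reported context window


class OasisLikePipeline:
    """Schematic of the four stages above; names are illustrative,
    not Decart's actual API."""

    def __init__(self):
        # Context layer: rolling short-term memory of recent frames.
        self.context = deque(maxlen=CONTEXT_FRAMES)

    def input_layer(self, keys, mouse, prev_frame):
        # Encode player commands + environmental data into one state.
        return {"keys": keys, "mouse": mouse, "frame": prev_frame}

    def diffusion_layer(self, state, context):
        # Stand-in for iterative denoising: refine noise into a frame
        # conditioned on the encoded state and recent context.
        return f"frame_conditioned_on_{len(context)}_past_frames"

    def output_layer(self, frame):
        # Consistency pass before display (smoothing, clamping, etc.).
        return frame

    def step(self, keys, mouse, prev_frame):
        state = self.input_layer(keys, mouse, prev_frame)
        frame = self.diffusion_layer(state, list(self.context))
        frame = self.output_layer(frame)
        self.context.append(frame)
        return frame
```

The tiny context window is exactly why the demo “forgets”: once a block or inventory item falls out of those 24 frames, the model has nothing anchoring it, so it morphs.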
Despite its innovation, this tech is in its infancy, with a 24-frame context window and 500 million parameters. It is said to be trained on several hours of Minecraft gameplay, but its public demo reveals flaws: entities and inventory items morph unpredictably, shattering your immersion in the “game”. 4,5
Impressive as it is, if a giant like Microsoft or NVIDIA were to work on similar tech, it could pose a real risk to game developers.
In a world where technology continually reshapes our digital playgrounds, the fusion of generative AI with gaming is crafting experiences that are as unpredictable as they are immersive. From NVIDIA's cutting-edge DLSS 4 and ACE enhancements to Decart AI's experimental Oasis, the industry is venturing into uncharted territory—pushing the limits of both performance and creativity. Considering all these developments, you probably thought I generated half this article with ChatGPT myself. Well you'd be wrong. It was only this last paragraph. I couldn’t fluff out a paragraph this much even if I tried.
That brings us to the ethicality of it all. Where should we draw the line for AI use in a medium as creative as gaming? Automating granular tasks could speed production and free developers to spend more time creatively ideating. But in humanity’s quest for efficiency, do we lose out on the humanity of it? It takes only a second to generate a similar essay on the evolving tech landscape of gaming with humanity’s brainchildren like ChatGPT and Gemini.
But there’s something more genuine about sitting in an Operating Systems class, not listening to the professor, ignoring the impending doom of your ISAs, and typing out your thoughts yourself. I’d like to envision a world where generative AI is a tool for game development, like an animation aid or a texture creator, and not the pièce de résistance itself.