Artificial intelligence can’t make good video game worlds yet, and maybe never will

This is Stepback, a weekly newsletter discussing one major story from the world of technology. For more news on the video game industry’s shift against generative artificial intelligence, follow Jay Peters. Stepback arrives in subscribers’ inboxes at 8:00 a.m. ET. Sign up for Stepback here.

Long before the generative AI explosion, video game developers were creating games that could generate their own worlds. Think of titles like Minecraft, or even the 1980 original Rogue (the basis for the term “roguelike”); these games and many others create worlds on the fly according to certain rules and parameters. Human developers work hard to make sure the worlds their games can create are engaging to explore and full of things to do, and at their best, these types of games can be replayed for years because the environments and experiences feel fresh each time you play.
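The core idea behind this kind of procedural generation can be sketched in a few lines: a seed plus fixed rules deterministically produces a world, so the same seed always recreates the same map while a new seed yields a fresh one. This is an illustrative toy, not code from any of the games mentioned; `generate_world` and its parameters are hypothetical names.

```python
import random

def generate_world(seed, width=8, height=4, wall_chance=0.25):
    """Build a tiny tile map ('#' = wall, '.' = floor) from a seed."""
    rng = random.Random(seed)  # seeded RNG makes the world reproducible
    tiles = []
    for _ in range(height):
        # Each tile becomes a wall with probability wall_chance.
        row = "".join(
            "#" if rng.random() < wall_chance else "." for _ in range(width)
        )
        tiles.append(row)
    return tiles

# The same seed always regenerates the identical world.
assert generate_world(42) == generate_world(42)
```

Real roguelikes layer far more rules on top (connected rooms, item placement, difficulty curves), which is where the human design effort described above comes in.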

But just as in other creative industries that are pushing back against it, generative AI is also coming to video games, though it may never catch up to the best that humans can do today.

Generative AI in video games has become a lightning rod: players have revolted against in-game AI slop, and half of developers think generative AI is bad for the industry.

Big video game companies are jumping into the murky waters of artificial intelligence anyway. PUBG maker Krafton is turning into an “AI First” gaming company, EA is partnering with Stability AI for “transformative” game creation tools, and Ubisoft is promising to “rapidly invest in player-focused generative AI” in a major shake-up. The CEO of Nexon, which owns the company that produced last year’s megahit Arc Raider, put it perhaps most ominously: “I think it’s important to assume that every game company is using AI now.” (Some indie developers disagree.)

Bigger game companies often present their commitments as a way to streamline and help with increasingly expensive game development. But the adoption of generative AI tools is a potential threat to jobs in an industry already notorious for waves of layoffs.

Last month, Google launched Project Genie, an “early research prototype” that allows users to create sandbox worlds using text or image prompts that they can explore for 60 seconds. Currently, the tool is only available in the US to people who subscribe to Google’s AI Ultra monthly plan for $249.99.

Project Genie is based on Google’s Genie 3 world model, which the company touts as a “key stepping stone on the road to AGI” that can enable “artificial intelligence agents capable of real-world reasoning, problem-solving, and action”; Google says the model’s potential uses “go far beyond gaming.” But it got a lot of attention in the industry: it was the first real indication of how generative AI tools could be used for video game development, just as tools like OpenAI’s DALL-E and Sora showed what could be possible with AI-generated images and video.

In my testing, Project Genie was barely capable of generating even remotely interesting experiences. The “worlds” don’t allow users to do anything other than move around using the arrow keys. After the 60 seconds are up, you can’t do anything with what you’ve created except download a record of what you’ve done, which means you also can’t plug what you’ve generated into a traditional video game engine.

Sure enough, Project Genie allowed me to generate awful unauthorized Nintendo knockoffs (apparently a result of Genie 3 being trained on online videos), which raised a lot of familiar concerns about copyright and AI tools. But they weren’t even in the same universe of quality as the worlds in Nintendo’s handcrafted games. The worlds were silent, the physics were sloppy, and the environments felt rudimentary.

The day after Project Genie was announced, the stock prices of some of the biggest video game companies, including Take-Two, Roblox, and Unity, fell. That prompted a bit of damage control. Take-Two president Karl Slatoff, for example, firmly dismissed Genie on an earnings call a few days later, arguing that it was not yet a threat to mainstream games. “Genie is not a game engine,” he said, adding that technology like it “definitely doesn’t replace the creative process” and that the tool looks more like “procedurally generated interactive video.” (Share prices rose again in the following days.)

Google will almost certainly continue to improve its Genie world models and tools for creating interactive experiences. It’s unclear whether it will want to improve gaming experiences or instead focus on finding ways to help Genie with its aspirational march towards AGI.

However, other AI company leaders are already pushing for interactive AI experiences. xAI’s Elon Musk recently said that “real-time,” “high-quality,” “personalized” video games will be available “next year,” and in December said that building an “AI game studio” is a “major project” for xAI. (As with many of Musk’s claims, take his predictions and timelines with a grain of salt.) Meta’s Mark Zuckerberg, now pushing AI as the new social media after the company cut jobs in its metaverse group, envisions a future where people create a game from a prompt and share it with people in their feeds. Even Roblox, a gaming company, is demonstrating how creators will be able to use AI world models and prompts to generate and change game worlds in real time, something it calls “real-time dreaming.”

But even in the most ambitious view, where AI technology is capable of generating worlds that are as responsive and interesting to explore as a video game that runs locally on a home console, PC, or your smartphone, there’s a lot more that goes into creating a video game than just creating a world. The best games have engaging gameplay, interesting content, and original art, sound, writing, and characters. And it sometimes takes years for human developers to make sure all the elements work together properly.

AI technology isn’t ready to generate games yet, and anyone who thinks otherwise is fooling themselves. But AI-generated video is still bad, too, and it was still used to make a lot of Super Bowl commercials, so tech companies will probably keep pouring effort into games made with generative AI. In an already volatile industry, even the idea that AI tools can compete with what humans produce could have massive implications.

But games are far more complex than AI video, which has improved significantly in a short period of time yet has fewer variables to consider. AI game creation tools will almost certainly improve, too, but the results may never close the gap with what humans can produce.

  • In a lengthy X post, Unity CEO Matthew Bromberg claims that world models aren’t a risk, but a “powerful accelerator.”
  • While the video game industry probably shouldn’t feel threatened by AI world models just yet, generative AI tools will continue to be controversial in game development. Even Larian Studios, beloved for games like Baldur’s Gate 3, is not immune to backlash.
  • Steam requires developers to disclose when their games use generative AI to generate content, but in a recent change, developers don’t have to disclose whether they’ve used “AI-powered tools” in their game development environments.
  • Some games, such as the text-based game Hidden Door and Amazon’s Snoop Dogg game on the cloud gaming service Luna, are embracing generative AI as a fundamental aspect of the game.
  • NYU game professor Joost van Dreunen has his take on the situation surrounding Project Genie.
  • Scientific American has a great explainer on how world models work.