Video game companies are dipping their toes into the rapidly evolving world of generative AI with behind-the-scenes development. It may soon be applied to actual gameplay.
A recent example came earlier this month from Rec Room, the Seattle-based company behind the popular social app of the same name, which debuted Fractura, an in-game room built as a “research project” to demonstrate how players can use generative AI to make their own content for Rec Room.
The Seattle startup evaluated more than 20 tools while designing Fractura. The team used ChatGPT to develop and iterate on ideas, including a “bible” for Fractura’s backstory; visualized concepts via Midjourney and DALL-E; turned the resulting images into 3D assets with CSM and Shap-E; and built out Fractura’s alien skies with Skybox, an AI tool from Blockade Labs.
“Despite some challenges,” Rec Room wrote in a blog post, “the future for 3D GenAI looks bright, and the team looks forward to further explorations!”
Like anywhere else in tech, the debate over AI has been one of the running themes of 2023 in the games industry. Much of that debate, however, has been crowded out of the limelight by other concerns, such as a year-long wave of layoffs that has cost roughly 10,000 developers their jobs since January.
With work in the field already unstable, no developer I’ve spoken to has been enthusiastic about the advent of technology that could be (read: inevitably will be) used to automate and eliminate somebody’s position.
However, generative AI is already helping some studios behind the scenes.
By giving ChatGPT access to the code base for its free-to-play arena shooter RoboSquad Revolution, New York-based Zollpa has turned the chatbot into a useful assistant for playtesting, training, code analysis, and integrating new programs.
Zollpa also trained AI to go through its recorded gameplay footage and pick out particular clips based on recognizable keywords, such as “bug” or “Steam Play.” The AI then organizes a list and assigns each potential issue to the most relevant members of the team.
“It’s definitely helped with our productivity in general, so we don’t have to watch and rewatch our gameplay videos,” said Joey Thigpen, co-lead developer.
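Zollpa hasn’t published the details of that pipeline, but the general pattern is easy to sketch: run each recording through an off-the-shelf speech-to-text model, then scan the transcript for flagged keywords. The sketch below uses the open-source Whisper library; the keyword-to-team mapping and function name are invented for illustration.

```python
import whisper  # pip install openai-whisper

# Hypothetical mapping from spoken keyword to the team that should triage it.
KEYWORDS = {"bug": "qa-team", "steam play": "platform-team", "crash": "engine-team"}

model = whisper.load_model("base")

def triage_clip(video_path: str) -> list[dict]:
    """Transcribe a gameplay recording and flag segments that mention a keyword."""
    result = model.transcribe(video_path)
    flagged = []
    for segment in result["segments"]:
        text = segment["text"].lower()
        for keyword, owner in KEYWORDS.items():
            if keyword in text:
                flagged.append({
                    "timestamp": segment["start"],  # seconds into the recording
                    "keyword": keyword,
                    "assign_to": owner,
                    "context": segment["text"].strip(),
                })
    return flagged
```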
There is an important distinction to be made between using generative AI to accelerate development processes, and using the technology to make a game with content generated by AI, said David Ryan Hunt, CEO of Noodle Cat Games in Salt Lake City.
“We’ve put a lot of attention on, ‘Oh, I made a cool picture by typing a few words!’” said Hunt. “But the reality of game development is that the majority of the work that goes into making a product is unseen.”
As Hunt notes, AI art and prose are what get all the attention in this conversation, and they also carry legal and ethical challenges.
Plagiarism stew
Generative AI tools are trained on data, much of it scraped from publicly available online sources, which can include identifiable snippets of copyrighted work. Despite how it looks, and how it’s sold, publicly available AI isn’t creating anything; it’s grabbing everything it can find and boiling it for soup.
Professional artists such as Greg Rutkowski have discovered that AI tools “harvested” their online portfolios, flooding the internet with artwork that vaguely resembles their styles. Some have gone so far as to fight back with tools like Nightshade, which “poisons” online artwork in ways that disrupt AI models using it as training data.
Several companies have preemptively banned the use of AI tools in the projects they publish. Valve Software made headlines in June when it rejected a game submitted for publication on its online storefront Steam, on the basis that it had been made using AI.
The developer took to Reddit to complain, which spurred a public statement from Valve in response. According to Valve, the problem wasn’t the AI tools themselves, but rather the significant risk that material produced with those tools violates someone else’s copyright.
“The introduction of AI can sometimes make it harder to show a developer has sufficient rights in using AI to create assets, including images, text, and music,” a Valve representative told GeekWire in July.
Valve continued: “In particular, there is some legal uncertainty relating to data used to train AI models. It is the developer’s responsibility to make sure they have the appropriate rights to ship their game. …While developers can use these AI technologies in their work with appropriate commercial licenses, they can not infringe on existing copyrights.”
Bellevue, Wash.-based Valve has consistently taken a zero-tolerance stance on new technologies that could cause legal issues. It previously banned “play-to-earn” games from Steam in 2021; in those games, users could generate small amounts of cryptocurrency or collect special NFT drops by playing.
Meanwhile, one of Steam’s competitors, the Epic Games Store, has stepped in as a storefront that will accept Web3 projects and, as of September, may also be open to hosting AI-developed games.
AI content generation
It’s worth restating: the problem with generative AI in video games, at least for the time being, isn’t inherent to the tools. It typically lies in the data used to train those tools, and in how the tools regurgitate that data. A model trained indiscriminately on whatever data it can find poses serious risks, up to and including the possibility of illegal material in its dataset.
If best practices are pursued, though, generative AI has a number of potential applications in game design, particularly in user-generated content.
Microsoft made headlines earlier this fall through its partnership with Inworld.AI, which promises to bring unspecified “AI-powered” upgrades to future Xbox games. In theory, AI could lift certain current limitations in game narrative, such as the need for generic NPC dialogue. Instead of having a few scripted lines and coded behaviors, AI-driven characters could react realistically to whatever happens to or around them.
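Neither Microsoft nor Inworld has detailed how such characters would work under the hood, but the basic pattern is straightforward: hand a large language model a character persona plus the current game state, and ask it for a line of dialogue. Here is a minimal sketch assuming an OpenAI-style chat API; the model name, persona, and game-state fields are all hypothetical.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def npc_line(persona: str, game_state: dict) -> str:
    """Ask an LLM for one in-character line reacting to the current game state."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat model would do
        messages=[
            {"role": "system", "content": f"You are an NPC. Stay in character: {persona}"},
            {"role": "user", "content": f"React in one short line to these events: {game_state}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage: the blacksmith reacts to the player returning from a dragon fight.
print(npc_line(
    persona="a gruff village blacksmith who distrusts adventurers",
    game_state={"player_hp": 12, "last_event": "dragon slain", "time": "midnight"},
))
```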
For a more specific example, there’s Adventure Forge, scheduled for release on Steam Early Access in early 2024. It’s a no-code toolset by Seattle-area company Endless Adventures, which players can use to create their own original games, ranging from dungeon crawlers to visual novels.
This includes a built-in Stable Diffusion system by Scenario.GG that has been trained only on art assets created for the project by Endless Adventures. Anything created for Adventure Forge with its built-in generative AI, such as new maps or in-game props, would thus fall under Endless Adventures’ copyright.
Adventure Forge will ship with over 1,500 environmental assets for players to use in their games, such as floors, walls, and furniture. In previous generations of homebrew software, those assets would have been all users had to work with. With Scenario.GG’s system, though, players can simply create new Adventure Forge assets on the spot.
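Scenario.GG’s actual integration isn’t public, but the underlying idea, generating images from a diffusion model fine-tuned exclusively on studio-owned art, can be sketched with the open-source diffusers library. The model path and prompt below are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers

# Load a Stable Diffusion checkpoint fine-tuned only on the studio's own art,
# so every generated asset derives from material the studio already owns.
pipe = StableDiffusionPipeline.from_pretrained(
    "studio/adventure-forge-assets",  # placeholder model path
    torch_dtype=torch.float16,
).to("cuda")

# Generate a new in-game prop on the spot instead of shipping it on disc.
image = pipe("a mossy stone fountain, top-down RPG prop, studio house style").images[0]
image.save("fountain_prop.png")
```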
“This is not technology for technology’s sake,” Endless Adventures CEO Jordan Weisman told GeekWire. “This is from people whose whole lives have been about storytelling, and we want to empower storytelling, but we want to do it in a responsible way.”
Reality check
For now, the advances that generative AI offers game studios may not be massive.
“A lot of the capabilities that we have now are not very far removed from procedural generation and various types of algorithms that we’ve been using in games for decades,” said Hunt.
You’ve likely played a video game that uses procedural generation. Such games draw on a set of handcrafted assets, which an algorithm assembles into a randomized order. The most famous example may be the dungeons in Blizzard’s action-RPG series Diablo, which are different every time you play.
The distinction between procedural generation and generative AI may seem academic from a player’s perspective, but under the hood, they’re doing two very different things. If you picture game design as a set of building blocks, then procedural generation rearranges existing blocks according to a pre-programmed algorithm, while generative AI tries to create new blocks on the spot, based on the data it has accumulated about how blocks are supposed to work.
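To make the contrast concrete, here is roughly what classic procedural generation boils down to; the room names are invented for illustration. Note that the algorithm only rearranges blocks a designer already built, whereas a generative model would attempt to synthesize a brand-new room outright.

```python
import random

# Handcrafted building blocks: each room layout was authored by a designer.
ROOM_TEMPLATES = ["entry_hall", "treasure_vault", "spider_nest", "corridor", "boss_arena"]

def generate_dungeon(seed: int, length: int = 5) -> list[str]:
    """Assemble pre-made rooms into a randomized order: classic procedural generation.

    No new content is created; the algorithm only rearranges existing blocks,
    so the same seed always reproduces the same dungeon.
    """
    rng = random.Random(seed)
    rooms = [rng.choice(ROOM_TEMPLATES) for _ in range(length - 1)]
    rooms.append("boss_arena")  # every run ends with a hand-authored finale
    return rooms

print(generate_dungeon(seed=42))
```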
“We automate all sorts of different pieces of development,” said Hunt, who previously worked on Fortnite at Epic Games. “It’s not specific to AI. It’s just how the industry works.”
Current generative AI, from Hunt’s perspective, is a more advanced version of automated technologies already in use in game development. Many of its use cases, he said, are a simple step up from established best practices.
“There’s a big gap to me,” Hunt said, “between the discourse around AI and the practical reality of what it’s likely to look like over time.”
Over the course of the last year, bad actors have used various AI/LLM tools to create everything from quick cash-grabs, such as churning out nonsensical children’s books or flooding a sci-fi magazine with submissions, to scams that clone someone’s voice in order to extort money from their relatives.
In the games industry, there are already stories of people using the same tools to turn out simple games in a week or less. The concern is that we’re on the verge of a dark age of auto-generated machine garbage, in a business where many open storefronts already have real issues with “troll games.”
It’s likely that as usage increases, many of the problems with AI will get worse before they get better. In the meantime, that only makes it more important to find ways to separate the signal from the noise.