In May of 2024, I had the opportunity to give a guest lecture at UC Santa Cruz for their Game AI course, taught by Dr. Batu Aytemiz. This post is a quick summary of the talk, which did not focus on the models and architectures for AI agents in games, but rather how to implement agents in a practical and easy manner. Drawing from our experience at Regression Games working with studios to implement agent systems, I’ve pulled together core principles and tips to follow for a pain-free experience implementing NPCs and bots within your game.
You can see the full presentation on Google Slides and the lecture recording on YouTube.
-- Aaron Vontell, Founder of Regression Games
Building agents for games is hard. If you are a game developer, you may have experienced the pain of adding an NPC to your game, whether it be for interacting with players in combat, automating test scenarios, or building an engaging experience for multiplayer games. The inherent complexity of games makes agent implementation tricky. Let’s jump into discussing some principles and tricks to ensure that your agent implementation is smooth and easy!
When developing a game, it's crucial to think about agents early on. Imagine what use cases you may need agents for and define the behavior of these agents precisely.
The complexity of a game grows quickly during development - before you know it, core game logic, multiplayer networking, animation rigging, event systems, external APIs, and scene management can pile up. Think about these aspects of your project as you build to ensure that agents are easier to integrate:
Example: In Unity’s Boss Room project we wanted to implement agents for testing. Due to the way networked clients are mapped to characters in the game, a lot of work was needed to decouple this logic and allow local NPCs to “connect” as real players.
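One concrete way to plan for this early is to route all character control through a single interface, so a character never knows whether its commands come from a networked player or a local bot. The sketch below illustrates that idea in Python; all class and method names here are hypothetical, not from Unity or any specific engine.

```python
from abc import ABC, abstractmethod
import random

class InputSource(ABC):
    """Anything that can drive a character: a player or an agent."""

    @abstractmethod
    def next_command(self, state: dict) -> str:
        """Return the next command, e.g. 'move', 'attack', 'idle'."""

class PlayerInput(InputSource):
    """Commands arrive from the networking/input layer via a queue."""

    def __init__(self, command_queue: list):
        self.queue = command_queue

    def next_command(self, state: dict) -> str:
        return self.queue.pop(0) if self.queue else "idle"

class BotInput(InputSource):
    """A trivial rule-based agent: attack when an enemy is close."""

    def next_command(self, state: dict) -> str:
        if state.get("enemy_distance", 99) < 2:
            return "attack"
        return random.choice(["move", "idle"])

class Character:
    """The character only sees InputSource, never who is behind it."""

    def __init__(self, source: InputSource):
        self.source = source

    def update(self, state: dict) -> str:
        return self.source.next_command(state)
```

With this seam in place, swapping a bot in for a player is a one-line change at construction time, rather than a refactor of the networking layer after the fact.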
Too often, developers attempt to implement complicated agents to solve a use case. Start with the goals of your agent and work backwards until you arrive at the simplest algorithm or approach that gets the job done. In many cases, you will realize that your NPC does not need to be trained via reinforcement learning - a simple rule-based approach or behavior tree can get you 90% of the way there!
In general, try to follow this process:
As an example, let’s imagine a match 3 puzzle game, and you’d like an agent that can evaluate the difficulty of a level by attempting to solve it. You could train an agent using imitation learning, or an RL agent that continuously plays many levels and uses level completion as a reward function. However, wouldn’t randomly selecting a swap on the board and taking that move be a good enough start?
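That random baseline is only a handful of lines. Here is a minimal sketch, assuming the board is a 2D list of tile colors; the helper names (`all_swaps`, `makes_match`, `random_agent_move`) are illustrative, not from any real match-3 codebase.

```python
import random

def all_swaps(board):
    """Enumerate every adjacent pair of cells that could be swapped."""
    rows, cols = len(board), len(board[0])
    swaps = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                swaps.append(((r, c), (r, c + 1)))
            if r + 1 < rows:
                swaps.append(((r, c), (r + 1, c)))
    return swaps

def makes_match(board, a, b):
    """Apply the swap on a copy, then look for any run of three."""
    board = [row[:] for row in board]
    (r1, c1), (r2, c2) = a, b
    board[r1][c1], board[r2][c2] = board[r2][c2], board[r1][c1]
    rows, cols = len(board), len(board[0])
    for r in range(rows):
        for c in range(cols):
            if c + 2 < cols and board[r][c] == board[r][c + 1] == board[r][c + 2]:
                return True
            if r + 2 < rows and board[r][c] == board[r + 1][c] == board[r + 2][c]:
                return True
    return False

def random_agent_move(board, rng=random):
    """Prefer a random match-making swap; fall back to any swap."""
    swaps = all_swaps(board)
    matching = [s for s in swaps if makes_match(board, *s)]
    return rng.choice(matching or swaps)
```

An agent like this gives you a difficulty signal (how many random attempts a level takes to clear) on day one, and you can graduate to learned agents only if the baseline proves too weak.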
Understanding the state and action space available to an agent is vital. Different types of state representations and action abstractions can impact not only how good the agent is, but also how difficult it is to integrate and build the agent in the first place.
Consider which of the following state spaces makes sense for your use case:
If you are training an RL agent, maybe screenshots make sense. If you are building a behavior tree, direct game state might be better. The key point here is that it varies from game to game and agent to agent - take some time to really think about it!
The same applies to actions - you need to consider whether direct key inputs or function calling is better. Additionally, you’ll need to think about how abstracted those actions are, if your action space is continuous or discrete, and how often your agent can take actions.
As a quick example, imagine a farming simulator game, and you want to build an agent that will harvest a plant. In order to get to and harvest a plant, your action space can take many forms:
Many of these may work - think about these actions in the context of your agent goals and game setup, and then decide which is best.
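To make the abstraction trade-off concrete, here is a sketch of two ends of that spectrum for the harvesting example: low-level directional inputs the agent must compose itself, versus a single high-level "harvest" intent the game fulfills internally. All names and the world representation are hypothetical.

```python
# Option 1: low-level, input-like actions. General, but the agent
# must learn or script many small steps to reach the goal.
LOW_LEVEL_ACTIONS = ["up", "down", "left", "right", "interact"]

def step_toward(pos, target):
    """Pick one low-level move that reduces distance to the target."""
    (x, y), (tx, ty) = pos, target
    if x < tx:
        return "right"
    if x > tx:
        return "left"
    if y < ty:
        return "up"
    if y > ty:
        return "down"
    return "interact"  # standing on the plant: harvest it

# Option 2: one high-level action. The game handles pathfinding and
# interaction internally; the agent only states its intent.
def harvest(plant_id, world):
    """High-level action: walk to the plant and harvest it."""
    plant = world["plants"].pop(plant_id)
    world["inventory"].append(plant["crop"])
    return world
```

A scripted or tree-based agent is usually far easier to build on top of Option 2, while a learned agent trained on Option 1 generalizes to goals you never wrote a high-level action for.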
Especially for multiplayer games, the architecture of your game affects where agent code should run. For instance, if your agent will act as a player in a multiplayer game, does that agent run on the server, or on the client? If your game is peer-to-peer, what happens if the client running the agent disconnects? How does ownership of the agent code get transferred? Finally, in single-player games, what if the agent relies on large systems that cannot be handled on-device, such as LLMs?
From our experience, we would recommend the following:
Wherever you do decide to run your agent code, make sure to understand the implications it will have for performance on player devices.
Evaluation should connect back to your agent goals. Create environments focused on both normal and edge cases and collect real data to iterate and improve your agents.
Example: In a racing game, you might measure lap finish times, the number of collisions, and time spent turning to evaluate an agent’s performance.
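Metrics like these are cheap to collect if you log one small report per run and aggregate them afterward. A minimal sketch, with metric names mirroring the racing example above (everything here is illustrative):

```python
from dataclasses import dataclass

@dataclass
class LapReport:
    lap_time: float      # seconds to finish the lap
    collisions: int      # wall/opponent hits during the lap
    turning_time: float  # seconds with steering input applied

def summarize(reports):
    """Aggregate per-lap reports into averages for comparing agents."""
    n = len(reports)
    return {
        "avg_lap_time": sum(r.lap_time for r in reports) / n,
        "avg_collisions": sum(r.collisions for r in reports) / n,
        "avg_turning_time": sum(r.turning_time for r in reports) / n,
    }
```

Run the same suite of levels or tracks against each agent version and compare summaries over time; regressions in these numbers are often the first sign that a change to the agent (or the game) broke something.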
The principles above highlight the main considerations to think about when developing a game that can support agents. Here are a few more tips and tricks that we have found helpful in our own development.
1. Expose Extra State and Actions: Provide agents with additional information like inventory data and ability cooldowns. Even if a player can’t view certain game information, that doesn’t mean an agent shouldn’t be able to!
2. Grant More Power: Allow agents to perform actions beyond human player capabilities for testing purposes. For example, if your agent can be implemented more easily if it could jump 10% higher than the player, then allow it to do so.
3. Implement Debug Tools: Use in-game overlays and other tools to debug agent behavior.
4. Leverage Built-in Tools: Utilize existing tools and frameworks to streamline agent development, such as navigation meshes and behavior tree tools.
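On the behavior tree point: a usable core is small enough that, even before adopting a full framework, a sketch like the one below can validate your agent design. Real frameworks add running states, decorators, and blackboards; the names here are illustrative only.

```python
SUCCESS, FAILURE = "success", "failure"

class Selector:
    """Try children in order; succeed on the first success."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Run children in order; fail on the first failure."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Condition:
    """Leaf node: succeeds when its predicate holds."""
    def __init__(self, pred):
        self.pred = pred
    def tick(self, state):
        return SUCCESS if self.pred(state) else FAILURE

class Action:
    """Leaf node: performs a side effect and succeeds."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        self.fn(state)
        return SUCCESS

# Example tree: attack when an enemy is in range, otherwise patrol.
tree = Selector(
    Sequence(
        Condition(lambda s: s["enemy_in_range"]),
        Action(lambda s: s.update(last_action="attack")),
    ),
    Action(lambda s: s.update(last_action="patrol")),
)
```

Once the tree's shape stabilizes, porting it to your engine's native tooling (or a dedicated library) buys you visual editors and debugging for free.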
Early and thoughtful integration of AI agents will make your life much easier.
If you follow these tips and leverage the resources available, your experience implementing an agent will be far less painful than it otherwise would be. We guarantee it.
For more insights and resources, visit our website at regression.gg or follow us on Twitter. Thank you for reading!