In video games, artificial intelligence (AI) is used to generate responsive, adaptive or intelligent behaviors, primarily in non-playable characters (NPCs), that mimic human-like intelligence. Artificial intelligence has been an integral part of video games since their inception in 1948, first seen in the game Nim.[1] AI in video games is a distinct subfield and differs from academic AI: it serves to improve the game-player experience rather than to advance machine learning or decision making. During the golden age of arcade video games, the idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such as pathfinding and decision trees to guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such as data mining and procedural content generation.[2] One of the best-known examples of this NPC technology and graduated difficulty levels can be found in the game Punch-Out!!.
In general, game AI does not mean, as is sometimes assumed or depicted, a realization of an artificial person corresponding to an NPC in the manner of the Turing test or an artificial general intelligence.
The term "game AI" is used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general, and so video game AI may often not constitute "true AI" in that such techniques do not necessarily facilitate computer learning or other standard criteria, only constituting "automated computation" or a predetermined and limited set of responses to a predetermined and limited set of inputs.[3][4][5]
Many industry and corporate voices argue that game AI has come a long way in the sense that it has revolutionized the way humans interact with all forms of technology, although many expert researchers are skeptical of such claims, and particularly of the notion that such technologies fit the definition of "intelligence" standardly used in the cognitive sciences.[3][4][5][6] Industry voices argue that AI has made technological devices more versatile, allowing them to be used beyond their original purpose, allegedly developing their own personalities and carrying out the user's complex instructions.[7][8]
People in the field of AI have argued that video game AI is not true intelligence, but an advertising buzzword used to describe computer programs that use simple sorting and matching algorithms to create the illusion of intelligent behavior while bestowing software with a misleading aura of scientific or technological complexity and advancement.[3][4][5][9] Since game AI for NPCs is centered on the appearance of intelligence and good gameplay within environment restrictions, its approach is very different from that of traditional AI.
Game playing was an area of research in AI from its inception. One of the first examples of AI is the computerized game of Nim, made in 1951 and published in 1952. Despite being advanced technology for its time, appearing 20 years before Pong, the game took the form of a relatively small box and was able to regularly win games even against highly skilled players. In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[10] These were among the first computer programs ever written. Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur.[11] Work on checkers and chess would culminate in the defeat of Garry Kasparov by IBM's Deep Blue computer in 1997.[12] The first video games developed in the 1960s and early 1970s, like Spacewar!, Pong, and Gotcha (1973), were games implemented on discrete logic and strictly based on the competition of two players, without AI.
Games that featured a single-player mode with enemies started appearing in the 1970s. The first notable ones for the arcade appeared in 1974: the Taito game Speed Race (racing video game) and the Atari games Qwak (duck hunting light gun shooter) and Pursuit (fighter aircraft dogfighting simulator). Two text-based computer games, Star Trek (1971) and Hunt the Wumpus (1973), also had enemies. Enemy movement was based on stored patterns. The incorporation of microprocessors would later allow more computation, and random elements could be overlaid onto movement patterns.
It was during the golden age of video arcade games that the idea of AI opponents was largely popularized, due to the success of Space Invaders (1978), which sported an increasing difficulty level, distinct movement patterns, and in-game events dependent on hash functions based on the player's input. Galaxian (1979) added more complex and varied enemy movements, including maneuvers by individual enemies who break out of formation. Pac-Man (1980) introduced AI patterns to maze games, with the added quirk of different personalities for each enemy. Karate Champ (1984) later introduced AI patterns to fighting games. First Queen (1988) was a tactical action RPG which featured characters that can be controlled by the computer's AI in following the leader.[13] The role-playing video game Dragon Quest IV (1990) introduced a "Tactics" system, where the user can adjust the AI routines of non-player characters during battle, a concept later introduced to the action role-playing game genre by Secret of Mana (1993).
Games like Madden Football, Earl Weaver Baseball and Tony La Russa Baseball all based their AI on an attempt to duplicate the coaching or managerial style of the selected celebrity on the computer. Madden, Weaver and La Russa all did extensive work with these game development teams to maximize the accuracy of the games. Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy.
The emergence of new game genres in the 1990s prompted the use of formal AI tools like finite state machines. Real-time strategy games taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things.[14] The first games of the genre had notorious problems. Herzog Zwei (1989), for example, had almost broken pathfinding and very basic three-state state machines for unit control, and Dune II (1992) attacked the players' base in a beeline and used numerous cheats.[15] Later games in the genre exhibited more sophisticated AI.
Later games have used bottom-up AI methods, such as the emergent behaviour and evaluation of player actions in games like Creatures or Black & White. The interactive story Façade, released in 2005, used branching dialogue and AI as its central gameplay element. Games have provided an environment for developing artificial intelligence with potential applications beyond gameplay. Examples include Watson, a Jeopardy!-playing computer; and the RoboCup tournament, where robots are trained to compete in soccer.[16]
Many experts complain that the "AI" in the term "game AI" overstates its worth, as game AI is not about intelligence, and shares few of the objectives of the academic field of AI. Whereas "real AI" addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal of strong AI that can reason, "game AI" often consists of a half-dozen rules of thumb, or heuristics, that are just enough to give a good gameplay experience. Historically, academic game-AI projects have been relatively separate from commercial products because the academic approaches tended to be simple and non-scalable. Commercial game AI has developed its own set of tools, which have been sufficient to give good performance in many cases.
Game developers' increasing awareness of academic AI and a growing interest in computer games by the academic community are causing the definition of what counts as AI in a game to become less idiosyncratic. Nevertheless, significant differences between different application domains of AI mean that game AI can still be viewed as a distinct subfield of AI. In particular, the ability to legitimately solve some AI problems in games by cheating creates an important distinction. For example, inferring the position of an unseen object from past observations can be a difficult problem when AI is applied to robotics, but in a computer game an NPC can simply look up the position in the game's scene graph. Such cheating can lead to unrealistic behavior and so is not always desirable. But its possibility serves to distinguish game AI and leads to new problems to solve, such as when and how to cheat.
The major limitation to strong AI is the inherent depth of thinking and the extreme complexity of the decision-making process. This means that although it would then be theoretically possible to make "smart" AI, the problem would require considerable processing power.
Game AI/heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is in the control of any NPCs in the game, although "scripting" (decision tree) is currently the most common means of control.[17] These handwritten decision trees often result in "artificial stupidity" such as repetitive behavior, loss of immersion, or abnormal behavior in situations the developers did not plan for.[18]
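To illustrate, a hand-authored decision tree of the kind described above might look like the following minimal sketch; the NPC attributes, thresholds and action names are hypothetical rather than drawn from any particular game.

```python
# Minimal, hypothetical hand-authored decision tree for an NPC guard.
# Every attribute name and threshold here is illustrative only.

def distance(a, b):
    return ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5

def guard_decision(npc, player):
    """Return an action string by walking a fixed tree of if/else checks."""
    if npc["health"] < 25:
        return "flee"                      # low health: always run
    if player["visible"]:
        if distance(npc, player) < 2.0:
            return "melee_attack"          # close enough to strike
        return "shoot"                     # otherwise attack at range
    if npc["heard_noise"]:
        return "investigate"               # scripted response to a sound cue
    return "patrol"                        # default behavior

# Example tick:
npc = {"health": 80, "heard_noise": False, "x": 0.0, "y": 0.0}
player = {"visible": True, "x": 5.0, "y": 3.0}
print(guard_decision(npc, player))         # -> "shoot"
```

Because every branch must be anticipated by the developer, situations that fall outside the tree drop through to the default case, which is one source of the "artificial stupidity" mentioned above.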
Pathfinding, another common use for AI, is widely seen in real-time strategy games. Pathfinding is the method for determining how to get an NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war".[19][20] Commercial video games often use fast and simple "grid-based pathfinding", wherein the terrain is mapped onto a rigid grid of uniform squares and a pathfinding algorithm such as A* or IDA* is applied to the grid.[21][22][23] Instead of just a rigid grid, some games use irregular polygons and assemble a navigation mesh out of the areas of the map that NPCs can walk to.[21][24] As a third method, it is sometimes convenient for developers to manually select "waypoints" that NPCs should use to navigate; the cost is that such waypoints can create unnatural-looking movement. In addition, waypoints tend to perform worse than navigation meshes in complex environments.[25][26] Beyond static pathfinding, navigation is a sub-field of game AI focusing on giving NPCs the capability to navigate in a dynamic environment, finding a path to a target while avoiding collisions with other entities (other NPCs, players, etc.) or collaborating with them (group navigation). Navigation in dynamic strategy games with large numbers of units, such as Age of Empires (1997) or Civilization V (2010), often performs poorly; units often get in the way of other units.[26]
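A minimal sketch of the grid-based approach described above, assuming a uniform-cost, 4-connected grid and a Manhattan-distance heuristic (the map and costs are purely illustrative):

```python
# Compact grid-based A* sketch; '#' cells are blocked, movement is 4-connected.
import heapq

def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    def h(cell):  # Manhattan distance, admissible on a uniform 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:                       # reconstruct the path by walking back
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == '#':   # blocked terrain
                continue
            if g + 1 < g_cost.get(nxt, float('inf')):
                g_cost[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None

level = ["....#....",
         "....#....",
         ".........",
         "....#...."]
print(a_star(level, (0, 0), (0, 8)))
```

Navigation meshes and waypoint graphs typically reuse the same search algorithm; what changes is the graph the search runs over.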
Rather than improve the game AI to properly solve a difficult problem in the virtual environment, it is often more cost-effective to just modify the scenario to be more tractable. If pathfinding gets bogged down over a specific obstacle, a developer may just end up moving or deleting the obstacle.[27] In Half-Life (1998), the pathfinding algorithm sometimes failed to find a reasonable way for all the NPCs to evade a thrown grenade; rather than allow the NPCs to attempt to bumble out of the way and risk appearing stupid, the developers instead scripted the NPCs to crouch down and take cover in place in that situation.[28]
Many contemporary video games fall under the category of action, first-person shooter, or adventure. In most of these types of games, there is some level of combat that takes place. The AI's ability to be efficient in combat is important in these genres. A common goal today is to make the AI more human or at least appear so.
One of the more positive and efficient features found in modern-day video game AI is the ability to hunt. AI originally reacted in a very black-and-white manner. If the player were in a specific area, the AI would either react in a completely offensive manner or be entirely defensive. In recent years, the idea of "hunting" has been introduced; in this "hunting" state the AI will look for realistic markers, such as sounds made by the character or footprints they may have left behind.[29] These developments ultimately allow for a more complex form of play. With this feature, the player can actually consider how to approach or avoid an enemy. This is a feature that is particularly prevalent in the stealth genre.
Another development in recent game AI has been the development of "survival instinct". In-game computers can recognize different objects in an environment and determine whether they are beneficial or detrimental to their survival. Like a human player, the AI can look for cover in a firefight before taking actions that would leave it otherwise vulnerable, such as reloading a weapon or throwing a grenade. There can be set markers that tell it when to react in a certain way. For example, if the AI is given a command to check its health throughout a game, then further commands can be set so that it reacts a specific way at a certain percentage of health. If the health is below a certain threshold, the AI can be set to run away from the player and avoid it until another function is triggered. Another example could be that if the AI notices it is out of bullets, it will find a cover object and hide behind it until it has reloaded. Actions like these make the AI seem more human. However, there is still a need for improvement in this area.
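A rough sketch of such threshold-driven rules, with hypothetical state variables and cut-off values chosen purely for illustration:

```python
# Hypothetical threshold-driven "survival instinct" rules, as described above.
# States, variable names and numbers are illustrative only.

def combat_behavior(agent):
    """Pick a behavior each frame from simple survival checks."""
    if agent["health"] < 20:
        return "retreat"                 # below a health threshold: break off and flee
    if agent["ammo"] == 0:
        return "take_cover_and_reload"   # out of bullets: hide behind a cover object
    if agent["grenade_incoming"]:
        return "dive_away"               # avoid an imminent threat before acting
    return "attack"

agent = {"health": 55, "ammo": 0, "grenade_incoming": False}
print(combat_behavior(agent))            # -> "take_cover_and_reload"
```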
Another side-effect of combat AI occurs when two AI-controlled characters encounter each other; first popularized in the id Software game Doom, so-called 'monster infighting' can break out in certain situations. Specifically, AI agents that are programmed to respond to hostile attacks will sometimes attack each other if their cohort's attacks land too close to them. In the case of Doom, published gameplay manuals even suggest taking advantage of monster infighting in order to survive certain levels and difficulty settings.
Procedural content generation (PCG) is an AI technique to autonomously create in-game content through algorithms with minimal input from designers.[30] PCG is typically used to dynamically generate game features such as levels, NPC dialogue, and sounds. Developers input specific parameters to guide the algorithms into making content for them. PCG offers numerous advantages from both a developmental and player experience standpoint. Game studios are able to spend less money on artists and save time on production.[31] Players are given a fresh, highly replayable experience as the game generates new content each time they play. PCG also allows game content to adapt in real time to the player's actions.[32]
Generative algorithms (a rudimentary form of AI) have been used for level creation for decades. The iconic 1980 dungeon crawler computer game Rogue is a foundational example. Players are tasked with descending through the increasingly difficult levels of a dungeon to retrieve the Amulet of Yendor. The dungeon levels are algorithmically generated at the start of each game, and the save file is deleted every time the player dies.[33] The algorithmic dungeon generation makes each run unique, even though the goal of retrieving the amulet is the same each time.
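Rogue's actual generator is more elaborate, but the general idea of algorithmic level generation can be sketched as follows: random rectangular rooms are carved into a grid and joined by simple corridors (all sizes and counts here are arbitrary, not Rogue's actual parameters).

```python
# Toy roguelike-style level generator: random rooms joined by L-shaped corridors.
# An illustrative sketch only, not Rogue's actual algorithm.
import random

def generate_level(width=40, height=15, rooms=5, seed=None):
    rng = random.Random(seed)
    grid = [['#'] * width for _ in range(height)]
    centers = []
    for _ in range(rooms):
        w, h = rng.randint(4, 8), rng.randint(3, 5)
        x, y = rng.randint(1, width - w - 2), rng.randint(1, height - h - 2)
        for r in range(y, y + h):            # carve the room
            for c in range(x, x + w):
                grid[r][c] = '.'
        centers.append((y + h // 2, x + w // 2))
    for (r1, c1), (r2, c2) in zip(centers, centers[1:]):   # connect consecutive rooms
        for c in range(min(c1, c2), max(c1, c2) + 1):
            grid[r1][c] = '.'
        for r in range(min(r1, r2), max(r1, r2) + 1):
            grid[r][c2] = '.'
    return ["".join(row) for row in grid]

for line in generate_level(seed=42):
    print(line)
```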
Opinions on total level generation as seen in games like Rogue can vary. Some developers are skeptical of the quality of generated content and want to create a world with a more "human" feel, so they use PCG more sparingly, generating only specific components of an otherwise handcrafted level. A notable example of this is Ubisoft's 2017 tactical shooter Tom Clancy's Ghost Recon Wildlands. Developers used a pathfinding algorithm trained with a data set of real maps to create road networks that would weave through handcrafted villages within the game world. This is an intelligent use of PCG, as the AI has a large amount of real-world data to work with and roads are straightforward to create. However, the AI would likely miss nuances and subtleties if it were tasked with creating a village where people live.
As AI has become more advanced, developer goals are shifting to create massive repositories of levels from data sets. In 2023, researchers from New York University and the University of the Witwatersrand trained a large language model to generate levels in the style of the 1981 puzzle game Sokoban. They found that the model excelled at generating levels with specifically requested characteristics such as difficulty level or layout.[34] However, current models such as the one used in the study require large datasets of levels to be effective. They concluded that, while promising, the high data cost of large language models currently outweighs the benefits for this application. Continued advancements in the field will likely lead to more mainstream use in the future.
The musical score of a video game is an important expression of the emotional tone of a scene to the player. Sound effects, such as the noise of a weapon hitting an enemy, help indicate the effect of the player's actions. Generating these in real time creates an engaging experience for the player because the game is more responsive to their input. An example is the 2013 adventure game Proteus, where an algorithm dynamically adapts the music based on the angle from which the player is viewing the in-game landscape.
Recent breakthroughs in AI have resulted in the creation of advanced tools that are capable of creating music and sound based on evolving factors with minimal developer input. One such example is the MetaComposure music generator. MetaComposure is an evolutionary algorithm designed to generate original music compositions during real-time gameplay to match the current mood of the environment.[35] The algorithm is able to assess the current mood of the game state through "mood tagging". Research indicates a significant positive correlation between player-rated engagement and dynamically generated musical compositions that accurately match the player's current emotions.[36]
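MetaComposure itself is an evolutionary algorithm, but the "mood tagging" idea can be illustrated with a much simpler sketch, in which a coarse mood derived from hypothetical game-state variables selects musical parameters:

```python
# Illustrative sketch only: a far simpler stand-in for systems like MetaComposure,
# showing how a mood tag derived from game state could drive music selection.
MOOD_TRACKS = {           # hypothetical mood -> musical parameters mapping
    "calm":   {"tempo": 70,  "key": "C major"},
    "tense":  {"tempo": 110, "key": "D minor"},
    "combat": {"tempo": 150, "key": "E minor"},
}

def tag_mood(state):
    """Derive a coarse mood tag from a few hypothetical game-state variables."""
    if state["enemies_engaged"] > 0:
        return "combat"
    if state["enemies_nearby"] > 0 or state["player_health"] < 30:
        return "tense"
    return "calm"

def select_music(state):
    return MOOD_TRACKS[tag_mood(state)]

print(select_music({"enemies_engaged": 0, "enemies_nearby": 2, "player_health": 80}))
# -> {'tempo': 110, 'key': 'D minor'}
```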
Game AI often amounts to pathfinding and finite state machines. Pathfinding gets the AI from point A to point B, usually in the most direct way possible. State machines permit transitioning between different behaviors. The Monte Carlo tree search (MCTS) method[37] provides a more engaging game experience by creating additional obstacles for the player to overcome. MCTS builds a tree of possible moves in which the AI essentially plays out simple games such as tic-tac-toe; depending on the outcomes of these playouts, it selects a pathway yielding the next obstacle for the player. In complex video games, these trees may have more branches, provided that the player can come up with several strategies to surpass the obstacle. A 2022 survey[38] covers recent applications of the MCTS algorithm in various game domains, such as perfect-information combinatorial games, strategy games (including RTS), and card games.
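The core MCTS loop of selection, expansion, simulation and backpropagation can be sketched for tic-tac-toe, the toy domain mentioned above; a production game AI would substitute its own state, move and reward representations.

```python
# Compact Monte Carlo tree search (MCTS) sketch for tic-tac-toe.
import math, random

def moves(board):                      # board: 9 chars, 'X', 'O' or '-'
    return [i for i, c in enumerate(board) if c == '-']

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != '-' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if '-' not in board else None

def play(board, i, p):
    return board[:i] + p + board[i+1:]

class Node:
    def __init__(self, board, player, parent=None, move=None):
        self.board, self.player = board, player    # player = side to move at this node
        self.parent, self.move = parent, move
        self.children, self.untried = [], moves(board)
        self.wins, self.visits = 0.0, 0

    def ucb_child(self):               # UCT formula with exploration constant sqrt(2)
        return max(self.children, key=lambda c: c.wins / c.visits +
                   math.sqrt(2 * math.log(self.visits) / c.visits))

def mcts(board, player, iterations=2000):
    root = Node(board, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one untried child
        if node.untried and winner(node.board) is None:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(play(node.board, m, node.player),
                         'O' if node.player == 'X' else 'X', node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to the end of the game
        sim_board, sim_player = node.board, node.player
        while winner(sim_board) is None:
            m = random.choice(moves(sim_board))
            sim_board = play(sim_board, m, sim_player)
            sim_player = 'O' if sim_player == 'X' else 'X'
        result = winner(sim_board)
        # 4. Backpropagation: credit each node from the viewpoint of the player who moved into it
        while node:
            mover = 'O' if node.player == 'X' else 'X'
            node.visits += 1
            node.wins += 1.0 if result == mover else 0.5 if result == 'draw' else 0.0
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

print(mcts('---------', 'X'))   # best opening move for X (usually the centre, index 4)
```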
Academic AI may play a role within game AI, outside the traditional concern of controlling NPC behavior. Georgios N. Yannakakis highlighted four potential application areas: player-experience modeling, procedural content generation, data mining on user behavior, and alternative approaches to NPC AI.
Rather than procedural generation, some researchers have used generative adversarial networks (GANs) to create new content. In 2018 researchers at Cornwall University trained a GAN on a thousand human-created levels for Doom; following training, the neural net prototype was able to design new playable levels on its own. Similarly, researchers at the University of California prototyped a GAN to generate levels for Super Mario.[39] In 2020 Nvidia displayed a GAN-created clone of Pac-Man; the GAN learned how to recreate the game by watching 50,000 (mostly bot-generated) playthroughs.[40]
In the context of artificial intelligence in video games, cheating refers to the programmer giving agents actions and access to information that would be unavailable to the player in the same situation.[41] Believing that the Atari 8-bit could not compete against a human player, Chris Crawford did not fix a bug in Eastern Front (1941) that benefited the computer-controlled Russian side.[42] Computer Gaming World in 1994 reported that "It is a well-known fact that many AIs 'cheat' (or, at least, 'fudge') in order to be able to keep up with human players".[43]
For example, if the agents want to know if the player is nearby they can either be given complex, human-like sensors (seeing, hearing, etc.), or they can cheat by simply asking the game engine for the player's position. Common variations include giving AIs higher speeds in racing games to catch up to the player or spawning them in advantageous positions in first-person shooters. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after a minimum of trial and error if it were not for this advantage. Cheating is often implemented for performance reasons where in many cases it may be considered acceptable as long as the effect is not obvious to the player. While cheating refers only to privileges given specifically to the AI—it does not include the inhuman swiftness and precision natural to a computer—a player might call the computer's inherent advantages "cheating" if they result in the agent acting unlike a human player.[41] Sid Meier stated that he omitted multiplayer alliances in Civilization because he found that the computer was almost as good as humans in using them, which caused players to think that the computer was cheating.[44] Developers say that most game AIs are honest but they dislike players erroneously complaining about "cheating" AI. In addition, humans use tactics against computers that they would not against other people.
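The contrast can be sketched as follows; the sensor model (view distance and field of view) and all numbers are hypothetical.

```python
# Illustrative contrast between a "fair" simulated sensor check and a cheating lookup.
import math

def player_visible_fair(npc, player, view_distance=15.0, fov_degrees=90.0):
    """Simulated perception: the NPC only 'sees' the player within range and field of view."""
    dx, dy = player["x"] - npc["x"], player["y"] - npc["y"]
    if math.hypot(dx, dy) > view_distance:
        return False
    angle_to_player = math.degrees(math.atan2(dy, dx))
    # wrapped angular difference between facing direction and direction to player
    return abs((angle_to_player - npc["facing"] + 180) % 360 - 180) <= fov_degrees / 2

def player_position_cheat(game_state):
    """Cheating: ask the game engine for the player's exact position, seen or not."""
    return game_state["player"]["x"], game_state["player"]["y"]

game_state = {"player": {"x": 30.0, "y": 4.0}}
npc = {"x": 0.0, "y": 0.0, "facing": 0.0}
print(player_visible_fair(npc, game_state["player"]))   # False: out of view distance
print(player_position_cheat(game_state))                # (30.0, 4.0) regardless
```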
In the 1996 game Creatures, the user "hatches" small furry animals and teaches them how to behave. These "Norns" can talk, feed themselves, and protect themselves against vicious creatures. It was the first popular application of machine learning in an interactive simulation. Neural networks are used by the creatures to learn what to do. The game is regarded as a breakthrough in artificial life research, which aims to model the behavior of creatures interacting with their environment.[45]
In the 2001 first-person shooter Halo: Combat Evolved, the player assumes the role of the Master Chief, battling various aliens on foot or in vehicles. Enemies use cover very wisely, and employ suppressing fire and grenades. The squad situation affects the individuals, so certain enemies flee when their leader dies. Attention is paid to the little details, with enemies notably throwing back grenades or team-members responding to being bothered. The underlying "behavior tree" technology has become very popular in the games industry since Halo 2.
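Halo 2's implementation is proprietary, but the general behavior-tree pattern it popularized, a priority selector over condition-gated sequences of actions, can be sketched with hypothetical node types and behaviors:

```python
# Minimal behavior-tree sketch: composite selector/sequence nodes over leaf tasks.
# This illustrates the general pattern, not Halo 2's proprietary implementation.

class Selector:
    """Tries children in priority order; succeeds at the first one that succeeds."""
    def __init__(self, *children): self.children = children
    def run(self, npc):
        return any(child.run(npc) for child in self.children)

class Sequence:
    """Runs children in order; fails as soon as one fails."""
    def __init__(self, *children): self.children = children
    def run(self, npc):
        return all(child.run(npc) for child in self.children)

class Condition:
    def __init__(self, test): self.test = test
    def run(self, npc): return self.test(npc)

class Action:
    def __init__(self, name): self.name = name
    def run(self, npc):
        npc["last_action"] = self.name
        return True

# Hypothetical combat tree: flee if the leader is dead, otherwise fight, otherwise patrol.
combat_tree = Selector(
    Sequence(Condition(lambda npc: npc["leader_dead"]), Action("flee")),
    Sequence(Condition(lambda npc: npc["enemy_visible"]), Action("attack")),
    Action("patrol"),
)

npc = {"leader_dead": False, "enemy_visible": True}
combat_tree.run(npc)
print(npc["last_action"])   # -> "attack"
```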
The 2005 psychological horror first-person shooter F.E.A.R. has player characters engage a battalion of cloned super-soldiers, robots and paranormal creatures. The AI uses a planner to generate context-sensitive behaviors, the first time in a mainstream game. This technology is still used as a reference for many studios. The Replicas are capable of utilizing the game environment to their advantage, such as overturning tables and shelves to create cover, opening doors, crashing through windows, or even noticing (and alerting the rest of their comrades to) the player's flashlight. In addition, the AI is also capable of performing flanking maneuvers, using suppressing fire, throwing grenades to flush the player out of cover, and even playing dead. Most of these actions, in particular the flanking, are the result of emergent behavior.[46][47]
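F.E.A.R.'s planner belongs to the goal-oriented action planning (GOAP) family, in which the AI searches for a sequence of actions whose effects satisfy a goal. A toy sketch of that idea, with hypothetical actions, preconditions and effects, might look like this:

```python
# Toy goal-oriented action planning (GOAP) sketch: breadth-first search over world
# states. Actions, preconditions and effects are hypothetical, not F.E.A.R.'s own.
from collections import deque

ACTIONS = {
    # name: (preconditions, effects) over a small boolean world state
    "draw_weapon":   ({},                               {"armed": True}),
    "move_to_cover": ({},                               {"in_cover": True}),
    "flush_target":  ({"armed": True},                  {"target_exposed": True}),
    "attack_target": ({"armed": True, "in_cover": True,
                       "target_exposed": True},         {"target_down": True}),
}

def plan(state, goal):
    """Search for a sequence of actions whose combined effects satisfy the goal."""
    start = frozenset(k for k, v in state.items() if v)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        facts, actions = queue.popleft()
        if all(g in facts for g in goal):
            return actions
        for name, (pre, eff) in ACTIONS.items():
            if all(p in facts for p in pre):        # action is applicable
                nxt = facts | frozenset(eff)        # apply its effects
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, actions + [name]))
    return None

print(plan({"armed": False}, goal={"target_down"}))
# -> e.g. ['draw_weapon', 'move_to_cover', 'flush_target', 'attack_target']
```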
The survival horror series S.T.A.L.K.E.R. (2007–) confronts the player with man-made experiments, military soldiers, and mercenaries known as Stalkers. The various encountered enemies (if the difficulty level is set to its highest) use combat tactics and behaviors such as healing wounded allies, giving orders, out-flanking the player and using weapons with pinpoint accuracy.
The 2010 real-time strategy game StarCraft II: Wings of Liberty gives the player control of one of three factions in a 1v1, 2v2, or 3v3 battle arena. The player must defeat their opponents by destroying all their units and bases. This is accomplished by creating units that are effective at countering opponents' units. Players can play against multiple different levels of AI difficulty ranging from very easy to Cheater 3 (insane). The AI is able to cheat at the difficulty Cheater 1 (vision), where it can see units and bases when a player in the same situation could not. Cheater 2 gives the AI extra resources, while Cheater 3 gives it an extensive advantage over its opponent.[48]
Red Dead Redemption 2, released by Rockstar Games in 2018, exemplifies the advanced use of AI in modern video games. The game incorporates a highly detailed AI system that governs the behavior of NPCs and the dynamic game world. NPCs in the game display complex and varied behaviors based on a wide range of factors including their environment, player interactions, and time of day. This level of AI integration creates a rich, immersive experience where characters react to players in a realistic manner, contributing to the game's reputation as one of the most advanced open-world games ever created.[49]
The 2024 browser-based sandbox game Infinite Craft uses generative AI software, including LLaMA. When two elements are being combined, a new element is generated by the AI.[50]
Generative artificial intelligence, AI systems that can respond to prompts and produce text, images, and audio and video clips, rose to prominence in the early 2020s with systems like ChatGPT and Stable Diffusion. In video games, these systems create the potential for game assets to be produced in effectively unlimited quantities, bypassing typical limitations on human creation. However, as in other fields, there are concerns, particularly the potential loss of jobs normally dedicated to the creation of these assets.[51]
In January 2024, SAG-AFTRA, a United States union representing actors, signed a contract with Replica Studios that would allow Replica to capture the voicework of union actors for creating AI voice systems based on their voices for use in video games, with the contract assuring pay and rights protections. While the contract was agreed upon by a SAG-AFTRA committee, many members criticized the move, saying they had not been told of it until it was completed and that the deal did not do enough to protect the actors.[52]
The future of AI in video games holds the potential for even more engaging and responsive experiences. Thanks to progress in machine learning and technologies like neural networks, AI characters can adapt and develop according to player interactions, leading to distinct and personalized gameplay experiences. Additionally, advancements in AI will foster more natural and intuitive interactions between players and game characters. Future games might include NPCs capable of understanding and responding to intricate player commands through conversational language, boosting the immersion and realism of the game world. As Shahzad Ahmad highlights, AI's ability to create and modify content on the fly will allow games to present players with unique challenges and scenarios tailored to their playstyle and preferences, changes expected to significantly enhance the gaming experience.[53]
Recent advancements in AI for video games have led to more complex and adaptive behaviors in non-playable characters (NPCs). For instance, AI systems now utilize sophisticated techniques such as decision trees and state machines to enhance NPC interactions and realism, as discussed in "Artificial Intelligence in Games".[54] Research has also explored the use of complex neural networks to enable NPCs to learn and adapt their behavior based on player actions, enhancing the overall gaming experience, as detailed in the IEEE paper on "AI Techniques for Interactive Game Systems".[55]