Racing Games AI

This genre is mostly divided into two main groups: vehicular and specialty.

The variants of vehicular games appeared early, and the split stuck. They are differentiated by their camera perspective: first-/third-person racing games and overhead-view racers.

Specialty racing games are mostly fad driven – they are built around whatever racing-style sport is hot at the time.

One last subtype is the kart racing game, which simplifies the driving portion of the game and adds obstacles, strange tracks, and other action elements.

Common AI Elements

Track AI

Where the rubber meets the road, track AI is the system needed to keep a CPU-controlled car on the racetrack (or city street) at high speed, within the rules of the game. This is usually a state-based system, with the different vehicle states detailing the main ways that a racer can be on the track. As guidelines, most games use a combination of physical and optimal lines of travel (which are either data paths laid down in a track editor, or calculated automatically by a technique known as "finding the path of minimum curvature," as shown in the figure below) that mimic the invisible lines of travel that humans use when they race on tracks and roads. A minimal line-following sketch appears below the figure.

[Figure: finding the path of minimum curvature for a racing line]
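As an illustration, here is a minimal sketch of the line-following idea: the car finds its nearest racing-line waypoint and steers toward a point a few waypoints ahead. The waypoint list, lookahead distance, and car fields are assumptions for the example, not from any particular engine.

```python
import math

def steer_toward_line(car_pos, car_heading, racing_line, lookahead=3):
    """Return a steering angle aiming the car at a point ahead on the line.

    racing_line: list of (x, y) waypoints, laid down in a track editor or
    produced by a minimum-curvature pass (hypothetical data for this sketch).
    """
    # Find the nearest waypoint, then aim a few waypoints further ahead so
    # the car cuts smoothly toward the ideal line instead of oscillating.
    nearest = min(range(len(racing_line)),
                  key=lambda i: math.dist(car_pos, racing_line[i]))
    target = racing_line[(nearest + lookahead) % len(racing_line)]
    desired = math.atan2(target[1] - car_pos[1], target[0] - car_pos[0])
    # Smallest signed angle between the current and desired headings.
    return (desired - car_heading + math.pi) % (2 * math.pi) - math.pi
```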

 

Traffic

Some games use traffic systems that look very realistic, with lane changes, cars pulling over for police vehicles, and proper use of traffic lights and intersections. This is mostly FSM behavior, with a lot of message passing to ensure that accidents don’t happen and some randomness to ensure that things don’t look repetitive. A small sketch follows.
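A hedged sketch of such a traffic car as an FSM; the states and message names (SIREN_NEAR, LIGHT_RED, and so on) are invented for illustration, with messages assumed to arrive from the game's messaging layer.

```python
import random

class TrafficCar:
    """Toy FSM-driven traffic car; states and messages are hypothetical."""

    def __init__(self):
        self.state = "CRUISE"

    def on_message(self, msg):
        # Message passing coordinates cars so accidents don't happen.
        if msg == "SIREN_NEAR" and self.state == "CRUISE":
            self.state = "PULL_OVER"
        elif msg == "LIGHT_RED":
            self.state = "STOP"
        elif msg == "LIGHT_GREEN" and self.state == "STOP":
            self.state = "CRUISE"

    def update(self):
        # A little randomness keeps the traffic from looking repetitive.
        if self.state == "CRUISE" and random.random() < 0.01:
            self.state = "CHANGE_LANE"
        elif self.state == "CHANGE_LANE":
            self.state = "CRUISE"   # lane change finishes next tick
```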

Pedestrians

A game like GTA 2 has different types of pedestrians: for example, one type can stop by and steal your car, while another gets out of your way as you drive. Other systems use simple flocking behavior.

Enemy and Combat

This is about adding AI bots, usually armed, for the player to compete against.

Nonplayer Characters

NPCs are the characters you deal with for information, car cleaning, and so on. These characters are almost always scripted.

Other competitive behavior

Adding more interesting gameplay elements, like having a bridge collapse while you are racing across it.

Useful AI Techniques

Finite State Machines (FSMs)
Scripted Systems
Messaging System

Traffic and pedestrian systems mostly use messaging to communicate and coordinate their movement.

Genetic Algorithms

A genetic algorithm can be used to tune car performance parameters until the best outcome is reached. This is almost certainly done offline.

Areas that need improvement

Other areas of interest than crime

Not everybody’s mother wants to see her kid running over a prostitute for her wallet.

More intelligent AI enemies
Persistent world

It would be great if the whole world’s simulation were computed together.

Introduction to Game AI

  • It is a common mistake to think that the more complex the AI in a game, the better the characters will look to the player. Creating good AI is all about matching the right behaviors to the right algorithms
  • Knowing when to be complex and when to stay simple is the most difficult element of the game AI programmer’s art. The best AI programmers are those who can use a very simple technique to give the illusion of complexity
  • You need to make sure that a character’s AI matches its purpose in the game and the attention it will get from the player
  • Perceptions should cover only what the AI character actually needs, and no more
  • Take care when changing behaviors (two-soldiers conversation example – p. 63)
  • For in-game AI, behaviorism is the way to go. We are not interested in the nature of reality or mind; we want characters that look right. In most cases, this means starting from human behaviors and trying to work out the easiest way to implement them in software
  • Developing Game AI Could Use:

    • Hacks: ad-hoc solutions and neat effects
    • Heuristics: rules of thumb that only work in most, but not all, cases
    • Algorithms (the "proper" stuff)
  • Developers rarely build a great new algorithm and then ask themselves, "So what can I do with this?" Instead, you start with a design for a character and apply the most relevant tool to get the result
  • Some AI doesn’t require an algorithm or a technique; it only requires a simple bit of code that performs an animation at the right point
  • Human beings use heuristics all the time. We don’t try to work out all the consequences of our actions. Instead, we rely on general principles that we’ve found to work in the past (or that we have been brainwashed with, equally)
  • Problems with heuristics: ranged units shooting peons example (p. 65)
  • Common Heuristics (a sketch of the first one follows this list):

    • Most constrained

      • For example, a group of characters come across an ambush. One of the ambushers is wearing phased force-field armor. Only the new, and rare, laser rifle can penetrate it. One character has this rifle. When they select who to attack, the most constrained heuristic comes into play: it is rare to be able to attack this enemy, so that is the action that should be taken
    • Do the most difficult first

      • For example, an army has two squads with empty slots. The computer schedules the creation of five Orc warriors and a huge Stone Troll. It wants to end up with balanced squads. How should it assign the units to squads? The Stone Troll is the hardest to assign, so it should be done first. If the Orcs were assigned first, they would be balanced between the two squads, leaving room for half a Troll in each squad, but nowhere for the Troll to go
    • Try the most promising first: if the candidate actions can be ranked, attempt the most promising one before the rest
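A small sketch of the "most constrained" heuristic applied to the ambush example above: prefer the enemy that the fewest of our weapons can damage. The shooter/weapon interface is hypothetical.

```python
def pick_target(shooter, enemies):
    """Most-constrained target selection (illustrative interfaces only)."""
    def constraint(enemy):
        # Fewer usable weapons = rarer opportunity = higher priority.
        usable = [w for w in shooter.weapons if w.can_damage(enemy)]
        return (len(usable), shooter.distance_to(enemy))  # ties: nearest
    return min(enemies, key=constraint)
```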
  • Hacks and heuristics will get you a long way, but relying on them solely means you’ll have to constantly reinvent the wheel. General bits of AI, such as movement, decision making, and tactical thinking, all benefit from tried and tested methods that can be endlessly reused (use algorithms here)
  • Just remember that for every situation where a complex algorithm is the best way to go, there are likely to be at least five where a simpler hack or heuristic will get the job done
  • One of the major reasons that new AI techniques don’t achieve widespread use is their processing time or memory requirements
  • Processor Issues in Game AI:

    • Complex AI that does work in games needs to be split into bite-size components that can be distributed over multiple frames. The chapter on resource management shows how to accomplish this. Applying these techniques to any AI algorithm can bring it into the realm of usability
    • SIMD:

      • Most modern CPUs have dedicated SIMD processing. SIMD (single instruction multiple data) is a parallel programming technique where a single program is applied to several items of data at the same time
      • Steering algorithms benefit from this feature (see the data-parallel sketch after this list)
    • Multi-Core Processing and Hyper-Threading:

      • Modern processors have several execution paths active at the same time. Code is passed into the processor and divided into several pipelines that execute in parallel. The results from each pipeline are then recombined into the final result of the original code. When the result of one pipeline depends on the result of another, this can involve backtracking and repeating a set of instructions. A set of algorithms on the processor works out how and where to split the code and predicts the likely outcome of certain dependent operations; this is called branch prediction. This design of processor is called super-scalar
      • Normal threading is the process of allowing different bits of code to run at the same time. Since this is not truly possible on a serial computer, it is simulated by rapidly switching back and forth between different parts of the code.
      • A multi-core processor effectively has multiple separate processing systems (each may be super-scalar in addition). Different threads can be assigned to different processor cores
    • Virtual Functions/ Indirection

      • Virtual functions add flexibility to the code, but they are costly: every call goes through an extra indirection, which defeats inlining and can hurt caching
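For flavor, here is the data-parallel idea behind SIMD expressed with NumPy: one vectorized operation updates the positions of many agents at once, instead of a per-agent loop. Real engines would use CPU intrinsics (SSE/NEON); this only illustrates the layout.

```python
import numpy as np

positions = np.zeros((1024, 2))                 # x, y for 1024 agents
velocities = np.random.uniform(-1, 1, (1024, 2))

def update(dt):
    """Single (vectorized) instruction applied to multiple agents' data."""
    global positions
    positions += velocities * dt
```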
  • Memory Concerns:

    • Cache:

      • If processors had to rely on the main RAM, they’d be constantly stalled waiting for data
      • All modern processors use at least one level of cache: a copy of the RAM held in the processor that can be very quickly manipulated. Cache is typically fetched in pages; a whole section of main memory is streamed to the processor. It can then be manipulated at will. When the processor has done its work, the cached memory is sent back to the main memory.
      • In the author’s experience, dramatic speed-ups can be achieved by making sure that all the data needed for one algorithm is kept in the same place
  • PC Constraints:

    • If the AI gets less time to work, how should it respond? It can try to perform less work. This is effectively the same as having more stupid AI and can affect the difficulty level of the game. It is probably not acceptable to your quality assurance (QA) team or publisher for your game to be dramatically easier on lower-specification machines
  • Develop your AI code as an AI engine that can be reused
  • Try to develop tools that can generate your AI code (as with steering behaviors)
  • AI-Implant’s Maya module, for example, exposes complex Boolean conditions, and state machines, through graphical controls

Introduction to AI

  • What is AI?
  • Many trivial problems (like playing Connect 4) have been solved by computers, but there are many things that computers aren’t good at which we find trivial: recognizing familiar faces, speaking our own language, deciding what to do next, and being creative. These are the domain of AI: trying to work out what kinds of algorithms are needed to display these properties
  • In academia, some AI researchers are motivated by philosophy: understanding the nature of thought and the nature of intelligence and building software to model how thinking might work. Some are motivated by psychology: understanding the mechanics of the human brain and mental processes. Others are motivated by engineering: building algorithms to perform human-like tasks. Game developers are concerned with the last motivation
  • History of Academic AI:

    • The Early Days:

      • In the early days (before computers) some questions appeared (in philosophy of mind) as:

        • What produces thought?
        • Could you give life to an inanimate object?
        • What’s the difference between a cadaver and the living human it previously was?
      • Pioneers of the field in those days were Alan Turing (the father of AI), John von Neumann, and Claude Shannon
    • The Symbolic Era:

      • From the 1950s until the 1980s the main thrust of AI research was in "symbolic" systems
      • A symbolic system is one in which the algorithm is divided into two components (as in expert systems):

        • Set of knowledge: represented as symbols such as words, numbers, sentences, or pictures
        • Reasoning algorithm: that manipulates those symbols to create new combinations of symbols that hopefully represent problem solutions or new knowledge
      • Other symbolic approaches in games: blackboard architecture, pathfinding, decision trees, state machines, steering algorithms
      • A common feature (and trade-off) of symbolic systems: when solving a problem, the more knowledge you have, the less work you need to do in reasoning
      • The more knowledge you have, the less searching for an answer you need; the more search you can do (i.e., the faster you can search), the less knowledge you need
    • The Natural Era:

      • From the 1980s to the 1990s, frustration with symbolic approaches fell into two categories:

        • From engineering point:

          • early success on simple problems didn’t seem to scale to more difficult problems
        • From philosophical point:

          • Symbolic approaches are not biologically plausible (i.e., you can’t explain how a human being plans a route by pointing at a symbolic route-planning algorithm)
          • The effect was a move toward natural computing: techniques inspired by biology or other natural systems (like ANNs, GAs, and simulated annealing)
  • The no-free-lunch theorem and subsequent work has shown that, over all problems, no single approach is better than any other
  • The narrower the problem domain you focus on, the easier it will be for the algorithm to shine. Which, in a roundabout way, brings us back to the golden rule of AI: search (trying possible solutions) is the other side of the coin to knowledge (knowledge about the problem is equivalent to narrowing the number of problems your approach is applicable to)
  • Game AI:
  • Until 1990 almost all computer-controlled characters used FSMs
  • In 1997 the new technique was giving characters the ability to see colleagues and be notified when they were killed (GoldenEye 007)
  • In the mid-1990s RTS games (Warcraft II) were the first popular games with a robust pathfinding implementation
  • The AI in most modern games addresses three basic needs:

    • The ability to move characters,
    • The ability to make decisions about where to move
    • The ability to think tactically or strategically
  • Model of Game AI:

  • Movement:

    • Movement refers to algorithms that turn decisions into some kind of motion
    • Examples (49):

      • Super Mario example when attacking enemies with bullets
      • Example of a guard that wants to reach an alarm
  • Decision Making:

    • Involves the character working out what to do next
    • Examples: take the decision to attack, defend, patrol…
  • Strategy:

    • To coordinate a whole team you need a strategic AI
    • In the context of this book, strategy refers to an overall approach used by a group of characters
    • In this category are AI algorithms that don’t control just one character, but influence the behavior of a whole set of characters
    • Example: surrounding a player in FPS Game
  • Infrastructure:

    • These are information-gathering (perception) and execution-management issues
  • Agent-Based-AI:

    • agent-based AI is about producing autonomous characters that take in information from the game data, determine what actions to take based on the information, and carry out those actions
  • Techniques in this book are implemented into 3 categories: Algorithms, Data Structures, Game Infrastructure
  • Key elements to know when implementing algorithms:

  • Know the problem the algorithm is trying to solve
  • A general description of the working mechanism of the algorithm including diagrams
  • A pseudo-code presentation of the algorithm
  • An indication of the data structures used in the algorithm
  • Particular implementation notes
  • Analysis of the algorithm’s performance: execution speed, memory footprint, and scalability
  • Weaknesses in the approach

Common AI Development Concerns

  • Designing Issues:
  • Data-Driven AI System:

    • Data-driven primitive levels: animations, behaviors, and strategies
    • Designers can build animations themselves
    • A data-driven system doesn’t improve the AI engine itself; it supplies its degree of organization
    • Be sure that you are not data-driving areas that are better served by code solutions
    • Rule of Thumb Here:

      • Create reusable, simple primitives that allow users to build more complex objects
  • The One-Track-Mind Syndrome:

    • The problem is an AI programmer applying one AI technique to all AI problems in the game
    • The most common error here: "State machines are all you need!"
    • Every AI technique is suitable for particular inputs and particular game conditions
    • You can use an FSM for the basic layout of the game, a FuSM for the main short-term decision layer, a simple planner to run your pathfinding and long-term decision-making layer, and a scripting system to drive your animations
  • Level of Detail (LOD):

    • LOD entails handling AI detail at various levels:

      • Off-screen and faraway: characters are completely nonexistent for the human player
      • Off-screen and close: characters can’t be seen but the human player might still hear them
      • Very far off: the character is visible as a pixel or two
      • Far off: characters are visible as solid colors but no real details yet (you can differentiate between a truck and a car)
      • Medium: this distance would be your true area of sight
      • Close: anything closer than medium
      • Interaction: the character is interacting directly with the player in some way
    • If characters are off-screen and faraway, then we can forget about real obstacle avoidance
    • There could be an LOD system in your engine that determines the current level of detail
    • Areas within the player’s sight could be updated thirty times per second, while faraway areas might get only two or three updates (see the scheduler sketch below)
    • Scientist example (page 647)
    • LOD in RTS Game (page 648)

      • Building and mining are resolved statistically (without actually moving peons)
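A minimal sketch of an LOD update scheduler in the spirit of the tiers above; the tier names and frame intervals are illustrative only.

```python
# Update intervals in frames, assuming a 30 fps game loop. This is a
# simplified subset of the tiers listed above, purely for illustration.
UPDATE_INTERVAL = {
    "interaction": 1,    # every frame
    "medium": 1,         # true area of sight: ~30 updates/sec
    "far_off": 10,       # ~3 updates/sec
    "off_screen": 60,    # statistical simulation only
}

def tick(characters, frame):
    """Run full AI only for characters whose tier is due this frame."""
    for c in characters:                  # c.lod / c.update_ai are assumed
        if frame % UPDATE_INTERVAL[c.lod] == 0:
            c.update_ai()
```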
  • Helper AI:

    • Some areas still benefit from AI techniques as:

      • User Interface
      • AI Advisors
      • Automated Testing Systems

        • Testing Areas:

          • Limits testing: testing values around the limits of the system’s capabilities
          • Random testing: uses completely random input to the system
          • Smart testing: real gameplay techniques are applied
  • General AI Thinking:

    • Determine the goals of your AI controller
    • Brainstorm about how to solve these problems
    • Talk with the AI staff about each proposed solution you have
    • Try out the solutions in a small prototype (like AIsteroids)

      • Here you get 80% of the solution; the remaining 20% will be discovered in the code
    • Think in fuzzy, rough terms with your colleagues
  • Always differentiate between good programming and programming for the sake of goodness
  • Consider novelty in your AI engines (as in MOHAA, where enemies pick up a grenade you throw at them and throw it back)

Telltale signs of stupid AI: rule-bound enemies, bad pathfinding, non-contextual enemies, oblivious enemies

Steering Behaviors (SB)

  • Sometimes in games, creatures don’t require intelligence; you just want them to behave in a natural way. Here you are looking for steering behaviors
  • Steering techniques tend to be low level and concerned only with which direction a character should steer
  • Characters that use SBs are called autonomous agents, meaning that they are typically set in motion and don’t require additional help from the programmer or scripted elements in order to perform within the game world
  • SBs are only concerned with movement, so the autonomy here appears in that area only
  • Reynolds called his original creatures boids, and he set up 3 simple rules that can model complex flocking behavior
  • These rules are (see the sketch after this list):

    • Alignment: keep your heading in the same direction as the rest of the group
    • Cohesion: the force keeping the group together, tight and focused
    • Separation:

      • Can be seen as the opposite of cohesion. This gives group members a "personal space bubble" so they don’t crowd each other too much and every individual has room to operate
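A compact sketch of the three rules; vectors are plain (x, y) numbers, and the weights and bubble radius are tuning knobs, not canonical values.

```python
def flock_force(boid, neighbors):
    """Combine alignment, cohesion, and separation into one (x, y) force."""
    n = len(neighbors)
    if n == 0:
        return (0.0, 0.0)
    # Alignment: steer toward the average heading (velocity) of the group.
    align_x = sum(b.vx for b in neighbors) / n - boid.vx
    align_y = sum(b.vy for b in neighbors) / n - boid.vy
    # Cohesion: steer toward the group's center of mass.
    coh_x = sum(b.x for b in neighbors) / n - boid.x
    coh_y = sum(b.y for b in neighbors) / n - boid.y
    # Separation: steer away from neighbors inside the "personal bubble".
    close = [b for b in neighbors
             if (boid.x - b.x) ** 2 + (boid.y - b.y) ** 2 < 25.0]
    sep_x = sum(boid.x - b.x for b in close)
    sep_y = sum(boid.y - b.y for b in close)
    w_align, w_coh, w_sep = 1.0, 0.8, 1.5    # illustrative tuning weights
    return (w_align * align_x + w_coh * coh_x + w_sep * sep_x,
            w_align * align_y + w_coh * coh_y + w_sep * sep_y)
```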
  • Emergent behavior is something of a holy grail of game programming, where a combination of simple local rules leads to very complex global behaviors
  • Because every boid is responding to local stimuli, over time this can lead to a chaotic state. But negative feedback from the main controller brings the overall group back into a more ordered state
  • Predicting the flock’s behavior even one minute ahead is impossible!
  • When combining behaviors, take into account the following:

    • The combination could lead to meaningless behavior
    • Some behaviors could cancel other behaviors (antagonistic behaviors)
  • The most popular ways of combining behaviors are these (two of them are sketched in code after this list):

    • Simple Weighted:

      • Sum up all the individual steering forces
      • Apply a weight multiplier to each force vector before adding it into the running sum

        • This enables you to control the effect of each steering behavior on the overall system
      • The problem here is how to find the suitable weights that lead to balance
      • Expensive in CPU time, because this calculation is performed every game loop
    • Prioritized Sum:

      • Behaviors are stored in a prioritized list
      • When we update each one, we sum its result into an accumulated steering force
      • The magnitude of each incoming force is subtracted from the maximum force (set by the programmer)

        • When we exceed the maximum amount, we truncate the sum and stop updating
      • Advantages:

        • Here we update exactly the number of allowed behaviors and no more
        • We order the behaviors from most important to least important
    • Prioritized Dither:

      • All the given behaviors are stored in priority order
      • Each behavior is assigned a probability value
      • When the manager goes to update the behavior collection, it rolls the dice and tests against each behavior’s probability value

        • If the roll passes, the behavior is updated
        • If the behavior does something (sends back a steering vector), that result is used and the remaining behaviors are skipped
      • Otherwise, the next behavior in the collection is tested
      • Advantages:

        • A behavior that must act reliably can simply be assigned a very high probability
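Hedged sketches of the first two combination methods. Each behavior is assumed to be a function returning a steering force vector; everything else is illustrative.

```python
def weighted_sum(behaviors_with_weights, agent):
    """Simple weighted combination: weight scales each behavior's force."""
    fx = fy = 0.0
    for behavior, weight in behaviors_with_weights:
        bx, by = behavior(agent)
        fx += weight * bx
        fy += weight * by
    return fx, fy

def prioritized_sum(behaviors_by_priority, agent, max_force):
    """Prioritized sum: most important first, truncated at max_force."""
    fx = fy = 0.0
    remaining = max_force
    for behavior in behaviors_by_priority:
        bx, by = behavior(agent)
        magnitude = (bx * bx + by * by) ** 0.5
        if magnitude >= remaining:
            # Budget exhausted: truncate this contribution and stop.
            scale = remaining / magnitude if magnitude else 0.0
            return fx + bx * scale, fy + by * scale
        fx, fy = fx + bx, fy + by
        remaining -= magnitude
    return fx, fy
```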
  • Pros of Steering Behaviors:

    • Ease of implementation
    • Predictable
    • Inexpensive to execute on the CPU
  • Cons of Steering Behaviors:

    • Combinatorial complexity
    • Short-sightedness
    • Deadlocking problems
    • Large tuning requirements
    • They look robotic if misused
    • Not great at modeling idle states
  • Extension to Paradigm:

    • Layered Steering:

      • Each steering behavior can itself be driven by another steering system
      • The layers tend to go from general to specific
    • Learning Behaviors:

      • Use learning in steering behaviors to:

        • Alter the weights and probabilities of behaviors
        • Teach the agent when to use a specific behavior
      • GAs have been used successfully in this area
      • There are 2 papers useful on this topic:

        • Competition, Coevolution and the Game of Tag
        • Evolution of Corridor Following Behavior in a Noisy World
    • Common Steering Behaviors:

      • Approach
      • Pursuit
      • Evade
      • Arrive
      • Avoid
      • Wander
      • Pursuit with offset
      • Patrol
      • Match Target Speed
      • Hide
      • Interpose
      • Orbit
      • Flow Field Following
      • Queuing
    • Data-Driven Steering Behaviors:

      • Game designers will be grateful for this, because with one mouse click a new behavior can be added to the system
  • Optimizations:

    • Load Balance:

      • Update certain agents on certain loops

        • Take care here if you are using flocking groups: the whole group should be updated together
      • Update agents conditionally

        • For example, with a leader and a following group, the group is updated only when the leader changes his state
      • Use less costly combination methods a certain percentage of the time

        • This partially brings down CPU cost without losing all the accuracy in your game
    • Priority/Weight Adjustments:

      • Assign the highest priority to the least expensive behaviors you have (this also works well)
      • The less costly behaviors can then be checked first
  • Design Considerations:

    • Genre: best suited to tactical types
    • Agent reactivity: very reactive and easy to adjust
    • System realism: suitable for realism
    • Genre:

      • Genres that require emergent, organic movement from AI-controlled agents
      • Genres where groups and flocking behaviors exist
    • Development Limitations:

      • Verify the emergent behaviors produced by the system
  • Note:

    Always check that your problem is really simplified: break it down to the simplest behaviors you can

Distributed AI

  • The first rule of ALL game programming: Keep It Simple, Stupid
  • The purpose of the distributed method is to simplify overall AI creation and maintenance by spreading AI tasks across modular, as well as layered, systems that work with each other
  • Several AI Engine Layers:

    • Perception/Event Layer: filters incoming sense data for relevance and various other factors
    • Behavior Layer: determines the specifics of how to perform a given action
    • Animation Layer: determines which animation to play to fit the game state
    • Motion Layer: handles aspects like pathfinding, collisions, and avoidance
    • Short-Term Decision Making Layer:

      • The narrow-view intelligence for the AI entity, primarily concerned with just the entity
    • Long-Term Decision Making Layer:

      • Handles wide-view intelligence issues, like planning or team-based considerations
    • Location-Based Information Layer:

      • Includes information transmitted to the entity from influence maps and smart terrain
  • Perception and Events Layer:
  • Why create a central perception system (a caching sketch follows):

    • It prevents game values from being calculated several times within a single game loop
    • It supports debugging and tracking the system
  • Perception systems work well with message-based systems
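A minimal sketch of such a central perception layer: expensive queries are computed at most once per game loop and cached, and events are broadcast so entities don't recompute them. All names are invented.

```python
class PerceptionSystem:
    """Toy central perception/event layer; the interface is hypothetical."""

    def __init__(self):
        self._cache = {}
        self.listeners = []

    def begin_frame(self):
        self._cache.clear()            # cached values live for one loop only

    def distance(self, a, b):
        """Distance between two entities, computed at most once per frame."""
        key = (min(id(a), id(b)), max(id(a), id(b)))
        if key not in self._cache:
            self._cache[key] = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5
        return self._cache[key]

    def broadcast(self, event):
        """Message passing, e.g. ("HEARD_SHOT", position)."""
        for listener in self.listeners:
            listener.on_event(event)
```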

  • Behavior Layer:

    • This layer is a candidate for data-driven systems
    • The more content designers put into this layer, the more virtually "calculation-free" personality and intelligence your characters will exhibit
    • Scripts here shouldn’t contain math; they contain the sense-level styling that characters’ behaviors require to seem realistic
  • Animation Layer:
  • Choosing the right animation to play is not a trivial task
  • Scripted systems here save calculation time

  • Motion Layer

    • Basic movement, pathfinding, and obstacle-avoidance algorithms live here
    • You can adjust this layer for every character you have:

      • Smart character: knows when to use a teleport at the right time
      • Weak character: lets the obstacle pass first and then passes
      • Strong character: shoves the obstacle out of the way!
  • Short-Term Decision Making:

    • This layer is specific to the character, whether because of its attributes, its current perceptions, or its recent experience
    • A character might be almost dead, so its goal now is to get a health power-up or run away
    • A character’s weapon is empty, so its goal now is to hide, reload the weapon, and reappear
    • Most games develop the ST layer in a state-based manner
  • Long-Term Decision Making:

    • LT concerns are likely to sit outside any one unit
    • In RTS games this layer is usually developed as a FuSM
    • This layer contains the planning algorithms and high-level strategic decisions
    • In RTS games, the AI opponent can use a full fuzzy logic system to "best guess" the action to take given current information
  • Location-Based Information Layer:

  • This layer is like a blackboard architecture for creating AI engines
  • Uses of LBI systems (an influence-map sketch follows this section):

    • LBI can help the LT layer determine weak defensive areas
    • Knowing where military forces intersect on the map
    • Discovering valuable ambush areas on the map
    • Pathfinding algorithms use LBI to avoid kill zones or badly designed areas of the map
  • Triggers could be added to this layer to help simplify other layers (like "be careful, an enemy is near you!")
  • Quote: "reach for insect intelligence first, and then go on"
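A small influence-map sketch for this layer: each unit spreads influence over nearby grid cells, and strongly negative cells can be read as kill zones for pathfinding to avoid. Grid size, radius, and falloff are arbitrary choices for the example.

```python
GRID_W, GRID_H, RADIUS = 64, 64, 4

def build_influence(units):
    """Accumulate per-cell influence; u.strength < 0 marks enemy units."""
    grid = [[0.0] * GRID_W for _ in range(GRID_H)]
    for u in units:                      # u.cx, u.cy: unit's grid cell
        for dy in range(-RADIUS, RADIUS + 1):
            for dx in range(-RADIUS, RADIUS + 1):
                x, y = u.cx + dx, u.cy + dy
                if 0 <= x < GRID_W and 0 <= y < GRID_H:
                    falloff = 1.0 / (1 + abs(dx) + abs(dy))
                    grid[y][x] += u.strength * falloff
    return grid
```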

Other Techniques of Note

  • Artificial Life (ALife):

    • ALife is about searching for "governing principles" of life
    • Rules that can’t be broken and can be used without fail to understand aspects of nature and predict outcomes based on hard equations
    • Newton’s physics realized Descartes’ clockwork universe, in which God set up everything like a clock: God’s only contribution to the universe was to set everything in motion, and from there the laws of science took hold and have governed every sequence of events since that time
    • The chances of finding any such governing principles are so limited that we construct our own "life simulations," and in doing so find out more about how life in general operates
    • ALife is a field of study that hopes to understand natural life better by attempting to recreate biological phenomena in virtual computer environments
    • One main tenet of ALife is that life is simply an emergent property of nature following some very simple rules over and over again
    • An emergent property is a trait or behavior exhibited by a creature that reaches beyond the capabilities of its constituent parts
  • ALife Usage in Games:
    ALife was used in Black & White and the Creatures games
  • Artificial Life Disciplines:

    • Cellular Automata (CA):

      • CAs are a group of algorithms that show a stunning amount of complex behavior from the very simplest of rules (see the Game of Life sketch below)
      • Cells example (page 522)
      • When to use CAs:

        • Simulating mold growth
        • City building
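The classic concrete example is Conway's Game of Life: one update rule per cell, yet famously rich global behavior.

```python
from collections import Counter

def life_step(cells):
    """cells: set of live (x, y) coordinates; returns the next generation."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 neighbors, or 2 and was alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}
```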
    • Self-Organizing Behavior and Flocking:

      • Large groups of creatures can organize their movements within groups quickly and easily, with what appears to be a unified mind
      • Research here aimed to produce algorithms that let us replicate this kind of behavior using simple rules
      • The most famous research in this area was Reynolds’ boids work; using a few key concepts you can achieve a remarkable simulation of many types of flocking movement

        • These simple concepts are:

          • Separation to avoid crowding
          • Alignment to the average group heading
          • Positional cohesion of the group
    • Genetic Algorithms: can model the idea of evolution
  • Pros of ALife:

    • Emergent behavior:

      • ALife is one of the best ways we currently have of creating emergent situations
      • Emergent behavior appears in games where the AI opponent has simple actions that can be combined in different ways, leading to a wealth of different final behaviors
    • Behavior reuse:

      • ALife forces developers to build games out of building blocks, assembling gameplay until it can be expressed in simple rules. In fact, many ALife games are simple to build
  • Cons of ALife:

    • Emergent behavior: the emergent outcome might not be entertaining!
    • Tuning issues: little changes in game parameters can destroy the emergent behavior completely
  • Some games use ALife with creatures to create actual ecologies within the world instead of random spawn points

  • Planning Algorithms:
  • Planning is deciding upon a course of action before acting
  • The general recipe that planning algorithms follow:

    • Break the abilities of the AI into distinct operators
    • Design your AI character and game environment in terms of states
    • Construct either:

      • A tree that shows the transition connections between states and lists the transition operators
      • Rules embedded in each state that detail which operators are available
    • Then the AI forms a plan within a local working memory by applying these transitions to a copy of its current state, testing for the best route to the behavior it wants (a toy planner is sketched below)
  • Here you know your start state and end state; planning finds the best string of operations to get you to the end
  • Example: a plan for pathfinding (page 525)
  • Because planning is costly, we have traditionally used hardcoded plans (scripts, FSMs, ...), which usually let us produce the right behaviors
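A toy forward planner in the shape just described: facts as sets, operators with preconditions and effects, and a breadth-first search over copies of the current state. Purely illustrative, not a production algorithm.

```python
from collections import deque

def plan(start, goal, operators):
    """start/goal: frozensets of facts; operators: (name, pre, add, rem)."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps                      # ordered list of operator names
        for name, pre, add, rem in operators:
            if pre <= state:                  # preconditions satisfied
                nxt = (state - rem) | add     # apply effects to a copy
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                               # no plan found

# Hypothetical usage:
# ops = [("open_door", frozenset({"at_door"}), frozenset({"door_open"}),
#         frozenset())]
# plan(frozenset({"at_door"}), frozenset({"door_open"}), ops)
```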
  • Usage in Games:

    • Used in pathfinding algorithms
    • Used for attack/defense in RTS games: when the AI sees that the enemy is creating units of type X, it (the AI opponent) searches its technology tree for the path to the counter-unit and starts creating it
    • In FPS games: a human enters a room for a power-up and the AI should kill him (p. 527)

      • This is a serious AI player that mimics humans
    • Making the AI anticipate ambushes
  • Some Planning Techniques:

    • A*
    • Means-Ends Analysis (MEA):

      • Combines forward and backward searching of the tree and tries to minimize unnecessary search
    • Patch Recalculation (plan patching):

      • Take a broken plan and pass it to a function that patches the hole in the plan (by coming up with new planning steps)
      • This method is used when an event breaks the plan, and when your plans tend to be long
      • This planning technique provides a way of keeping plans up to date without having to start from zero
    • Minimax:

      • Assumes your opponent is going to work against you at every chance he gets
      • Reinforcement learning could be combined with this technique, resulting in more challenging behaviors
  • Pros of Planning:

    • Planning algorithms provide much more intelligent-looking behavior
    • Planning is generic and can be developed once and used in many contexts
    • Planning can be implemented hierarchically (See example page 528)
  • Cons of Planning:

    • Planning is computationally complex
    • Human players are unpredictable, so you should be careful with plan depth

      • Balance the speed and flexibility of the plan vs. having short-range plans to avoid gaffes
      • For long plans you could use patch recalculation to adapt them to the situation
    • Planning can make the AI seem sluggish or unreactive if plans are too monolithic or take too long to adapt to new situations
  • Areas for Exploitation within Games:

    • Planning is used when creating strategic AI systems that require many steps to achieve goals
    • The strongest candidate for planning is RTS games
    • In FPS games, it can be used to set ambushes and traps
    • Used in racing games to take critical corners well rather than just "speed up" the car
    • Used in fighting games for choosing damaging combinations (a boxing attack then a kick to the head, ...)
    • Used in football games to confuse the human player or waste time until the match ends (to win, of course!)
  • Production Systems:
  • Production systems are also referred to as expert systems
  • Production systems are rule-based systems that strive to encompass expert knowledge within a specific area
  • An example of this is hardcoded conditional if-then branches within your AI engine to make decisions
  • In 1969, Allen Newell and Herbert Simon released the theory of the General Problem Solver (GPS), based on how the human mind operates (means-ends analysis)
  • GPS tends to store expert knowledge about a highly specific problem
  • Production system is separated into four parts:

    • Global database
    • Production rules: serve as the actual if-then statements
    • Rule/situation matcher: decides which operator to apply next to move the database closer to your goal
    • Conflict resolution function (for use with rule collisions)
  • Production systems use forward-chaining and backward-chaining inference
  • Production systems could be used as planning systems or learning devices
  • John Laird and his team successfully interfaced Soar with both Descent 3 and Quake 2 and created competent, non-scripted opponents for each game. Using a system of more than 700 rules, they created a Quake bot that can move through game levels and use all weapons and level elements (such as bonus pads and teleporters). It can anticipate human actions, create custom routes of travel to maximize the number of power-ups it can collect, and perform intelligent ambushes and human-like behaviors. (A toy production system is sketched below.)
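On a vastly smaller scale, a production system reduces to the following sketch: a global database of facts, if-then rules, a matcher, and trivial first-match conflict resolution. The rules and facts are invented for illustration.

```python
# Rules: (conditions that must hold in the database, resulting action).
# Ordered most specific first, so first-match doubles as conflict resolution.
RULES = [
    ({"enemy_visible", "low_health"}, "flee"),
    ({"enemy_visible"}, "attack"),
    (set(), "patrol"),
]

def decide(database):
    """Matcher: return the action of the first rule whose conditions hold."""
    for conditions, action in RULES:
        if conditions <= database:
            return action

print(decide({"enemy_visible", "low_health"}))   # -> "flee"
```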
  • Pros of production systems:

    • Generic algorithm: production systems are data independent
    • Research: tons of research has been done on fast matching algorithms for production systems, like RETE and TRETE (a stateless RETE); they speed up matching
    • Goal directed
    • Highly reactive (with a good set of rules)
  • Cons of Production Systems:

    • Computationally expensive, especially in games with large rule sets or non-trivial conflict resolution
  • Areas for Exploitation within Games:

    • Written in a data-driven way, so new rules and perceptions can be added easily to the game world
    • This offers highly extensible and reusable systems
  • Decision Trees:
  • Here you essentially convert each if-else statement into a node within a tree
  • The root of the tree could be the question that the tree answers
  • There are 2 types of decision trees used (a classification sketch follows the comparison below):

    • Classification trees: trying to determine the "type" of player the human is behaving like (aggressive, defensive)
    • Regression trees: adjusting AI behaviors (too hard, too easy) based on how the human plays
  • The differences between using Decision Trees (DT) and NN are:

    • NNs are black boxes: weights can’t be changed by hand or understood, whereas DTs are fully understandable, descriptive, and easy to change
    • DTs can only make hard comparisons with a binary outcome, whereas NNs can handle noisy data or data with gaps and strange jumps in behavior
    • Output from a DT is always a discrete value, whereas output from an NN can be a continuous value
    • DTs consider a single variable at a time (monothetic), whereas NNs can handle multiple variables at a time (polythetic). So when a combination of variables drives one decision, NNs are more suitable
    • NNs are much more accurate than DTs. The relative error on a traditional data set might be 10 times higher or more for a DT than for a backpropagated NN
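A hand-built classification tree in this spirit: each node tests one variable (monothetic) and the leaves are player "types". The stats and thresholds are made up.

```python
def classify_player(stats):
    """Classify the human player from gameplay statistics (illustrative)."""
    if stats["shots_fired_per_min"] > 20:          # root question: trigger-happy?
        if stats["retreats_per_min"] < 1:
            return "aggressive"
        return "hit-and-run"
    else:
        if stats["time_in_cover"] > 0.5:           # fraction of time in cover
            return "defensive"
        return "explorer"
```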

  • Pros of Decision Trees:

    • Easy to use, understand, and change
    • Many algorithms exist for designing, training, debugging, tuning, and optimizing decision trees
    • DTs are well suited to games and often make more sense than NNs
    • Complexity can be managed using hierarchical decision trees
  • Cons of Decision Trees:

    • Binary DTs deal only with discrete states, don’t scale well, and are difficult to maintain or extend once they reach a certain level of complexity
    • There is a direct inverse correlation between the accuracy and the size of your tree

      • So if you need specific and accurate outputs, go to NNs or other techniques
    • Non-binary trees are hard to construct, alter, and use
  • Areas of Exploitation within Games:

    • Using classification to perform simple player modeling
    • Data-driving the trees, with support for adding and removing nodes (used in Black & White to record the high-level thinking your avatar did about his environment, given his experiences and training, about which actions to take at any given time)
    • Can easily be used by game level designers
  • Fuzzy Logic:
  • Fuzzy logic is a far more advanced system, complete with its own logical proofs and methods
  • The Sony PalmTop is reported to use decision trees based on fuzzy logic to classify handwritten Kanji characters
  • Fuzzy logic could be used in "Fuzzy-Logical Production System" which follows all the rules of regular production systems, but performs all its inference using fuzzy logic instead
  • The general inference process has three (optionally four) steps (p. 537; a numeric sketch follows this list):

    • Fuzzification: a membership function is applied to the input to determine its degree of membership
    • Inference: usually MIN or PRODUCT is used to infer the rules
    • Composition: usually MAX or SUM is used
    • Defuzzification: converting the fuzzy output to a crisp number (common methods: centroid and maximum)
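A bare-bones numeric pass through those steps for one input (distance to an obstacle) and one output (braking amount). The membership shapes, rule pairings, and weighted-average defuzzifier are all illustrative choices.

```python
def ramp(x, a, b):
    """Membership that is 0 at a, rising linearly to 1 at b (reversible)."""
    if a == b:
        return 0.0
    t = (x - a) / (b - a)
    return max(0.0, min(1.0, t))

def braking_amount(distance):
    # 1. Fuzzification: degree of membership in each input set.
    near = ramp(distance, 50, 0)     # fully "near" at 0 m, not at 50+
    far = ramp(distance, 25, 100)    # fully "far" at 100 m
    # 2. Inference: with one condition per rule, MIN just passes the
    #    membership through. Rules: near -> brake hard (0.9),
    #    far -> brake lightly (0.2).
    # 3. Composition: both rules contribute to the one output variable.
    # 4. Defuzzification: weighted average as a simple centroid stand-in.
    total = near + far
    return (near * 0.9 + far * 0.2) / total if total else 0.0
```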
  • Fuzzy systems are used in pattern recognition, financial systems and heavy data analysis
  • They are not implemented in systems that require a heavy amount of realism
  • Fuzzy set membership describes to what extent something already is true
  • Pros: extends Boolean logic in a new manner
  • Cons: computationally expensive (the Combs method combats this problem)
  • Areas of Exploitation within Games:

  • Dealing with unknown or partially known information (like player modeling)
  • In online games, it could be used as a "helper AI" that plays the game for you temporarily (if you are AFK for a while)

Neural Networks (NN)

  • NNs are sometimes called artificial neural nets (ANNs); they try to behave like the human brain
  • Neural Nets in Nature:

    • Animals’ brains are large clusters of interconnected nerve cells called neurons
    • The human brain is composed of about 100 billion neurons. Each neuron has a number of connections to other neurons, both coming in and going out (humans have about 10,000 connections per neuron)
    • Connections coming in are called dendrites, and those going out are called axons
    • Neurons are not actually connected; rather, the dendrites of one neuron come very close to the axons of another (0.01 micron), and the space between them is called the synaptic gap, or synapse
    • Working mechanism of a neuron:

      • A charge is transmitted to the neuron
      • If this charge gets too large (above a certain threshold), the neuron fires the collected energy down its axon; this is called an action potential
    • When a neuron fires a lot in a specific situation, it learns that it should fire in this situation. The opposite also applies
    • This introduces inhibition and excitation
  • What we want to emulate:

    • Take input, recognize patterns within the input, and make decisions based on these patterns
  • The human brain works at about 100 Hz, a fraction of the speed of modern computers
  • Because of the symbolic way that our brains store knowledge, we employ many levels of efficiency that allow massive parallel processing
  • Notes:

    • NNs tend to classify and recognize patterns within input
    • NNs without direction restrictions are called recurrent networks

      • In such a system, information can go from input to output and back again, allowing feedback within the system
      • To facilitate this feature, a number of state variables are associated with the neurons
    • Most AI programmers use FF (feed-forward) systems because they are easier to understand
    • A recurrent system can be simulated using multiple FF systems
    • Sparsely connected systems can be simulated using fully connected ones where some connection weights are kept small
    • NNs can be used as a basis for player modeling (so the AI system can predict player actions)
    • NNs can learn during live gameplay, so they can learn adaptive techniques
  • Artificial Neural Nets Overview:
  • Neuron Structure:

    • The value associated with a neuron is the sum of all input values multiplied by their connection weights, plus the neuron’s bias value (see the formula below)
    • Bias refers to the inhibitory or excitatory effect the neuron has within the network
    • The axon of a neuron is represented by its output value (optionally filtered through an activation function)
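In symbols, this is the standard artificial-neuron model (generic notation, not tied to any one book):

```latex
a_j = f\Big(\sum_i w_{ij}\, x_i + b_j\Big)
```

where the x_i are the input values, the w_ij are the connection weights, b_j is the neuron's bias, and f is the (optional) activation function.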
  • Neural Network Structure:

    • In the following diagram:

      • Circles are neurons (nodes)
      • Lines between nodes are the connections between them
      • Neurons in the 1st column represent the input layer
      • Neurons in the 2nd column represent the hidden layer

        • As this layer gets bigger, more patterns can be recognized
        • Perceptron: what you get when you map the input layer directly to the output layer, with no hidden layer

          • These are used for some linear pattern recognition
      • Neurons in the 3rd column represent the output layer

        • Contains the actual categories that the network is trying to impose on its inputs
      • Each connection has an associated value (weight) and a direction

        • The weight is biologically equivalent to the strength of the connection between 2 nodes
    • Nodes Connectivity:

      • Fully connected: each node is connected to every node in the next layer
      • Sparsely connected: the opposite of fully connected
  • NNs in the business world:

    • The US Postal Service uses heavily trained NNs for handwriting recognition when reading addresses on mail
    • Predicting weather, judging credit card fraud, voice recognition, diagnosing diseases, filtering internet sites
    • In Games:

      • Used in pattern recognition and prediction:

        • Patterns are recognized in order to make decisions
        • Enemy patterns are recognized and stored to be used in prediction
      • Example:

        • An NN can be trained to become a black box for potentially expensive operations like animation selection (what animation the basketball player should perform right now, given the state of the game, his skill, the surrounding players, the point spread, and the difficulty level of the game)
  • Steps for adding an NN to your game:

    • Set up your network
    • Train it using specially prepared data
    • Use it in live games
  • How to set up your network:

    • Structure:

      • Refers to both type (FF or recurrent) and organization (how many nodes, how many hidden layers)
      • Input Layer:

        • The number of variables the NN can categorize or pattern-match on determines the number of input nodes
        • Abstract variables that represent combinations or calculations based on simpler variables are better suited to NNs (e.g., in AIsteroids we could have an abstract "danger" variable calculated from others)
        • The fewer inputs you can get away with, the better
        • The more nodes you include in an NN, the larger the search space becomes that the NN must slog through to arrive at a suitable solution
      • Hidden Layer:

        • There are no real guidelines about how many hidden layers to use (but 1 layer is recommended)
        • A common practice is to use a medium number of hidden nodes (about two times the number of input nodes), then go a few up or down and compare the performance until you see it tapering off
        • Some criteria for determining the number of hidden nodes:

          • The number of training cases and the complexity of the function being solved
          • The amount of noise (variance) in the outputs
        • An NN is capable of encapsulating a nonlinear function that maps the inputs to the outputs
        • A perceptron is only capable of finding linear correlations between input and output, so we add a nonlinear activation function (AF) on the nodes to give the network nonlinearity
        • Any nonlinear function is suitable except polynomials. For backpropagation learning the AF must be differentiable, and it helps if the function is bounded
      • Output Layer:

        • The number of output nodes = the number of outputs required from the NN
        • For continuous output, the level of the neuron’s activation (NA) would tell you how much to do

          • The NN could tell you to turn right or turn left, and the NA would tell you how much to turn
        • Activation is calculated using an activation function
        • Common activation functions:

          • Step function, hyperbolic tangent, logistic sigmoid functions, Gaussian function
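Putting the structure together, a minimal feed-forward pass for one hidden layer might look like this in NumPy; the layer sizes echo the bullet-evasion example later in this section, and the random weights stand in for trained ones.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid activation function."""
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """Fully connected input -> hidden -> output pass."""
    hidden = sigmoid(inputs @ w_hidden + b_hidden)   # hidden activations
    return sigmoid(hidden @ w_out + b_out)           # output activations

# Hypothetical sizes: 3 inputs (enemy facing, enemy pos, player pos, each
# encoded as one number), 6 hidden nodes (~2x inputs, per the rule of
# thumb above), 2 outputs (a movement vector).
rng = np.random.default_rng(0)
out = feed_forward(rng.standard_normal(3),
                   rng.standard_normal((3, 6)), rng.standard_normal(6),
                   rng.standard_normal((6, 2)), rng.standard_normal(2))
```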
    • Learning Mechanism:

      • After setting up the NN, you should determine how you want to train it
      • Common Types of NN Learning:

        • Supervised:

          • Involves having training data that consists of input/output pairs
          • Backpropagation Method:

            • Here you feed input into the NN, and adjust the weights of the network if there is a discrepancy between the output from the NN and the expected output given in the training data
            • Training continues until a certain level of accuracy is achieved
            • It’s called backpropagation because the network parameters are adjusted from back to front
          • Reinforcement Learning:

            • Here outputs are not given to the algorithm, but the network is rewarded (its behavior reinforced) when it performs well
            • Some implementations also punish the system when it performs poorly!
        • Unsupervised Learning:

          • Involves statistically looking at the output and adjusting the weights accordingly
          • Perturbation Learning:

            • Similar to simulated annealing
            • Here you test your NN, then adjust some values by a small amount and try it again. If you get better performance, you keep the change and repeat the process; otherwise you go back to your last network settings (see the sketch below)
          • Genetic algorithms can also be used to adjust the weight values of an NN
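A sketch of that perturbation loop. The network is assumed to expose its weights as lists of floats, and evaluate is a stand-in for running the net in a test scenario; both are hypothetical.

```python
import copy
import random

def perturb_train(network, evaluate, iterations=1000, step=0.05):
    """Randomly nudge weights; keep a change only if fitness improves."""
    best_score = evaluate(network)
    for _ in range(iterations):
        candidate = copy.deepcopy(network)
        for layer in candidate.weights:     # assumed: list of weight lists
            for i in range(len(layer)):
                layer[i] += random.uniform(-step, step)
        score = evaluate(candidate)
        if score > best_score:              # keep improvements only
            network, best_score = candidate, score
    return network
```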
    • Creating Training Data:

      • This task matters most for supervised learning
      • Recording:

        • Record a human performing the same tasks you are trying to teach your system, then use the recording
        • The preceding technique is great because it can be used to build a system with human-level performance
      • Using Programs:

        • Another way is to write a program that generates reasonable input scenarios and have a human say which output should come up
        • This method is time consuming for the person involved
        • This method is suitable for discrete outputs and small output ranges
        • Enhancement:

          • You could generate random input/output pairs, validate them, and store the winners
        • It’s recommended to have at least 10 times as many training cases as input units (10N)
  • An NN is about determining the pattern between input and output, whereas a GA is about optimizing a set of numbers to maximize some fitness function
  • NNs are used primarily for:

    • Regression: finding a function that fits all the data points within some tolerance
    • Classification: classifying given inputs into outputs
  • Function fits come in three types: under-fitting, good fit, and over-fitting
  • Example of using NN:

    • Feature that needs AI: building an NN to help your AI enemy evade bullets shot by the player by sidestepping out of the way
    • Inputs to the NN: the enemy’s facing direction, the enemy’s position, and the player’s position
    • Output of the NN: the NN determines a movement vector for the enemy

[Figure: the bullet-evasion network’s inputs (enemy facing, enemy position, player position) and its movement-vector output]

  • Most of the success of an NN comes from capturing good data
  • Optimizations:

    • Optimizing NNs generally involves optimizing the training phase, because most NNs are trained offline
    • That means lessening the training time needed to construct a viable network design, and creating highly effective, relevant training data
    • Some points to consider:

      • If your network seems to be stuck in a local maximum, where the error becomes stable but still higher than the threshold:

        • You might be using too little training data
        • Your hidden layer might be too small
      • If your training seems unstable (meaning the error seems to jump all over the place):

        • You might have too many hidden layers
        • The network has essentially been given too much room to experiment within
      • Over-fitting:

        • You might have too small a training set
        • You might be running too many training iterations over the data
      • Under-fitting:

        • You might have a large amount of very noisy training data
        • You might not be training for enough iterations
      • If the error seems to be oscillating between values:

        • You may be using too large a learning rate, or the momentum might be too high
        • Gradient descent is a greedy algorithm and will perform poorly if the step size is too high

          • Here you can use Newton’s method (but it’s more costly)
  • Pros of Neural Net-Based Systems:

    • NNs are a great way to find abstract relationships between input conditions
    • NNs can stand in for very complex math functions

      • Here you save CPU time
      • It has been mathematically proven that an NN with at least one hidden layer and a nonlinear activation function can approximate almost any finite-dimensional vector function on a compact set
    • They can derive meaning from nonlinear or imprecise data
    • Training takes a fraction of the CPU time that trial-and-error methods take
    • Humans can make sense of them (the brain metaphor is intuitive)
  • Cons of Neural Net-Based Systems:

    • Determining how to train an NN is usually costly

      • Here we’ve mainly shifted the problem from how to solve the problem to how to teach the NN to solve it
    • An NN can learn bad relationships, so the output is not as expected

      • This problem can be caused by arbitrary, too numerous, or bogus inputs to the network
    • An NN is a mathematical black box and thus hard or even impossible to debug
    • All input fields must be numeric
    • NNs are difficult to implement

      • Due to the high number of factors that must be determined without guidelines or rules
      • These factors are: network structure, input/output choices, activation function, learning rate, training data issues, and weight initialization
    • Over-fitting is very common
    • NNs sometimes suffer from a phenomenon called catastrophic unlearning

      • This occurs when an NN is given additional training data that completely undoes all previous learning
    • In complex learning scenarios, lots of training data and CPU time are required for training
    • NNs larger than thousands of nodes are not very stable

      • The curse of dimensionality seems to make the learning ability of large nets implode somewhat
      • The network cycles, varying its weights forever, never getting closer to a solution!
  • Extensions to Paradigm:
  • Other types of NN:

    • Simple Recurrent Networks:

      • Here the hidden layer of the network is also connected back to the input layer, with each connection having a weight of 1
      • Could be used for sequence prediction
    • Hopfield nets:

      • Used to mimic associative memory within the brain
      • These networks allow entire "patterns" to be stored and then recalled by the system
      • Useful in image recognition
      • The number of nodes needed to store the information can be calculated directly
    • Committee of Machines:

      • Multiple NNs are trained on the same data but with different initializations
      • During usage, all the nets run on the input data and the best output from these NNs is chosen
    • Self-Organizing Maps (SOM):

      • Useful for classification tasks
      • Used to visualize relationships and classes within large, highly dimensional inputs
      • Used in games in player modeling:

        • By taking a number of dimensions of behavior into account, they give the AI system a better picture of the kind of player the human tends toward
      • Uses an unsupervised learning technique called competitive learning
  • Other Types of NN Learning:

    • Reinforcement Learning:

      • It’s called learning with a critic
      • Here the network outputs values, and the supervisor simply declares whether the result was a good one
      • It can appear to work unsupervised (because the critic could be the outer environment)
    • Unsupervised Learning:

      • Genetic algorithms are used to find the best weights for the connections between neurons
      • Perturbation learning, where weights are iteratively and randomly adjusted and tested for improvement
      • Competitive techniques, where a number of neurons "compete" for the right to learn by having their weights adjusted within the net
    • For problems where the output can’t be known ahead of time, the main job of the network is to classify, cluster, find relationships, and compress the input data (SOMs are an example of this)
  • Design Considerations:

  • Types of solutions:

    • Great when you have a simple, modular system that maps inputs to outputs (in surprising, nonlinear, black-box ways)
    • They are suitable at the tactical level

      • They can’t be used for, say, diplomacy systems in RTS games
    • You can use one when you have specific inputs (incoming ball, player’s position, player skills) and specific outputs (each of the possible catch animations to get the ball)
  • Agent Reactivity:

    • They contribute to faster reactivity for the agent
  • System Realism:

    • They give the system realism
    • Take care when using NNs, because in some games their faults come across as "stupid AI" rather than "a human mistake"
  • Development Limitations:

  • This is the biggest concern: NNs require investment and energy to develop
  • Also, if you use an NN for online learning, it is much harder to develop

Genetic Algorithms (GA)

  • GAs can solve problems where finding a solution by direct search would take too long
  • GA techniques use the principle of evolution to search for solutions to algorithmic problems
  • The process in nature works as follows:

    • Species need to be able to reproduce

      • Reproduction is simply executing the encoded rules necessary to build an organism
      • These rules are stored in strings of DNA called chromosomes
    • Chromosomes are built up from small modular sequences called genes

      • Genes are various permutations of four basic nucleotides (T, A, C, G); each gene holds info about the setting, or allele, for a particular trait
    • When two parents reproduce, their DNA is split, and the child’s DNA comes half from each of them. This is called crossover, or genetic recombination
    • The problem of bad traits and good traits

      • The quality measure of any one creature’s mix of traits is called fitness
    • A gene within a child organism is sometimes changed so that it’s completely new and can’t be traced to either parent; this is called mutation
  • The process can be thought of as "evolutionary search": we are still searching across the field of all possible solutions to a given game problem, but using a method of searching like the one found in the evolutionary process
  • The method is split into two parts: evolving a solution and using a solution
  • Evolving a solution is usually performed during production of the game, but some games (the Creatures series, Black & White) include GA learning in the shipped game, in a limited way
  • GAs are computationally expensive
  • GAs belong to the class of stochastic search (discussed in the appendix)
  • The problem of local maxima vs. the global maximum appears here
  • GAs separate the algorithm from the problem representation, which enables them to easily find solutions in systems of mixed variable types (discrete and continuous variables)
  • The common data structure used here is the string, but any data structure can be used if:

    • Each individual can encode a complete solution
    • Genetic operations can be constructed for the data structure
  • Basic Genetic Method:

    • Initialize:

      • Initialize a starting population of individuals
      • These individuals can be generated randomly or with promising values from previous knowledge
      • The size of the population depends on: available resources, how much time can be devoted, and what seems to work
    • Evaluation:

      • Evaluate each individual’s success within the problem space
      • Each individual is evaluated using a fitness function that returns a value indicating this individual’s overall performance
      • A fitness test that requires each individual to play for 5 minutes (to give it a chance to take some actions), given a population of 100, would thus require 100 × 5 = 500 minutes ≈ 8.33 hours per generation
    • Generation:

      • Generate new individuals using reproduction
      • Selection is another important part of genetic process
      • If you select only the best genes you can get stuck at a local maximum, while mixing in randomly selected genes keeps diversity up and tends to generate better solutions
      • There are three common methods to generate children:

        • Crossover (sexual reproduction)
        • Mutation (genetic variation)
        • Elitism: taking the most fit genomes from the last generation and carrying them unchanged into the next (the whole loop is sketched below)
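
    A minimal sketch of the full initialize/evaluate/generate loop, assuming a hypothetical bit-string genome and fitness() function, with elitism, single-point crossover, and bit-flip mutation:

        import random

        GENOME_LEN, POP_SIZE, GENERATIONS = 16, 100, 50

        def fitness(genome):
            return sum(genome)                    # hypothetical: count the 1-bits

        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]                        # Initialize

        for generation in range(GENERATIONS):
            ranked = sorted(population, key=fitness, reverse=True)     # Evaluation
            next_pop = [g[:] for g in ranked[:2]]                      # elitism: keep the 2 best
            while len(next_pop) < POP_SIZE:                            # Generation
                p1, p2 = random.sample(ranked[:POP_SIZE // 2], 2)      # select from the fitter half
                cut = random.randrange(1, GENOME_LEN)                  # single-point crossover
                child = p1[:cut] + p2[cut:]
                if random.random() < 0.05:                             # mutation
                    i = random.randrange(GENOME_LEN)
                    child[i] ^= 1
                next_pop.append(child)
            population = next_pop
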
  • Representing the Problem:

    • Gene and Genome:

      • Determine the structure of the genes in your problem: which traits are included, what their alleles are (on/off, real value), what their ranges are, and whether any of these traits depend on each other
      • This part of representation is the most important
      • Advantages (many good solutions) and disadvantages (longer search, unwanted solutions) of a big search space for GAs (p. 447)
      • Example: Gene and Genome of a Pac-Man player (447)
      • GAs are good for problems where you can’t formulate a good way of determining a solution directly
    • Fitness Function:

      • Pac-Man fitness example (499)
      • Your fitness function is really the essence of what your GA is trying to optimize so take care in design
      • Fitness data should be scaled to prevent premature convergence (PMC) and stagnation

        • PMC occurs when exceptional individuals dominate early generations, so the population converges on them too quickly and later children end up no better than their parents
        • Stagnation occurs when many individuals have similarly high numerical fitness, leaving little selection pressure
      • Some ways of scaling the data (the first two are sketched after this list):

        • Sigma Truncation:

          • F′ = F − (F̄ − c·σ), where:

            • F′ is the scaled fitness value and F is the raw fitness value
            • F̄ is the average fitness of the population
            • σ (sigma) is the population standard deviation
            • c is a reasonable multiplier (usually between 1 and 3)
          • Negative results are set to 0
        • Rank Scaling:

          • Replaces the fitness score with its position in the sorted order of fitness scores
        • Sharing Scaling:

          • Only similar individuals are scaled
          • The number of genes that are shared among many genomes is recorded
          • Genomes are then grouped by how many shared genes they have
          • The fitness score of each genome is scaled by the number of other genomes in its sharing group
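
    A minimal sketch of sigma truncation and rank scaling as defined above (c = 2 is an arbitrary example value):

        import statistics

        def sigma_truncation(fitnesses, c=2.0):
            """F' = F - (avg - c * sigma), with negative results set to 0."""
            avg = statistics.mean(fitnesses)
            sigma = statistics.pstdev(fitnesses)   # population standard deviation
            return [max(0.0, f - (avg - c * sigma)) for f in fitnesses]

        def rank_scaling(fitnesses):
            """Replace each raw score with its rank in sorted order (worst = 1)."""
            order = sorted(range(len(fitnesses)), key=lambda i: fitnesses[i])
            ranks = [0] * len(fitnesses)
            for rank, i in enumerate(order, start=1):
                ranks[i] = rank
            return ranks
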
    • Reproduction:

      • There are two main reproduction types:

        • Generational reproduction

          • Using the last generation as a pool to create the next, via direct copying (elitism), crossover, and mutation; the original generation is completely replaced
        • Steady State:

          • A few individuals created via crossover or mutation replace specific individuals in the original generation, but the main body of the population remains unchanged
          • Who gets replaced: the worst, a random individual, the most similar, or the parents
      • Elitism Technique:

        • Advantage: ensures that we don’t lose the best individual in any population (requires altering the selection routine)
        • Disadvantages: lessens diversity and speeds up convergence
      • Selection Techniques:

        • Roulette Wheel Selection:

          • The genome with the highest score has the largest chance of being selected

          • That genome gets the biggest slice of the roulette wheel
          • A high-fitness individual may be selected multiple times
          • Selection is by random chance; thus, the fittest individual is not guaranteed to be selected

            • For this reason elitism is a common practice in GA genome selection
        • Stochastic Universal Selection:

          • The roulette wheel is given evenly spaced pointers, equal in number to the offspring wanted; the wheel is spun once and each pointed-at individual is chosen
          • This technique has the advantage of keeping genetic diversity high
        • Tournament Selection:

          • A number of individuals (k) are randomly drawn from the pool
          • The highest-fitness individual goes to the next generation
          • Then all of them go back into the pool (alternatively, the selected individual can be removed)
          • The previous steps are repeated several times
          • Notes:

            • If the tournament size is larger, weak individuals have a smaller chance of being selected
            • A 1-way tournament (k = 1) is equivalent to random selection
            • This technique works well on parallel architectures
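
    A minimal sketch of roulette-wheel and tournament selection over a population with precomputed fitness values (assumed non-negative for the wheel):

        import random

        def roulette_select(population, fitnesses):
            """Chance of selection is proportional to each genome's share of the wheel."""
            pick = random.uniform(0, sum(fitnesses))   # spin the wheel once
            acc = 0.0
            for genome, f in zip(population, fitnesses):
                acc += f
                if acc >= pick:
                    return genome
            return population[-1]                      # guard against float rounding

        def tournament_select(population, fitnesses, k=3):
            """Draw k individuals at random; the fittest contender wins."""
            contenders = random.sample(range(len(population)), k)
            return population[max(contenders, key=lambda i: fitnesses[i])]
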
  • After choosing individuals for crossover (sketched after this list):

    • Generate a random number (between 0 and 1) for each pair
    • If this number is less than a set crossover rate (say 0.7), the crossover operation is applied and two offspring are generated
    • Otherwise the pair is copied into the next generation without alteration
    • Suggestion: assign each crossover technique its own operator value, so that when that value comes up the corresponding technique is used
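
    A minimal sketch of that crossover-rate gate; the 0.7 rate is the example value from the text, and the crossover argument stands in for any of the operators below:

        import random

        CROSSOVER_RATE = 0.7

        def maybe_crossover(p1, p2, crossover):
            """Apply the operator with probability CROSSOVER_RATE;
            otherwise pass copies of the parents through unchanged."""
            if random.random() < CROSSOVER_RATE:
                return crossover(p1, p2)   # two new offspring
            return p1[:], p2[:]            # copied without alteration
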
  • Which crossover operator to apply to a pair depends on several things:

    • The type of variable structure your genomes are using
    • A healthy dose of experimentation
  • Some of the binary crossover operators are the following (sketched after this list):

    • Single-Point Crossover:

      • A position is randomly chosen somewhere along the length of the genome, and all the genes after this position are swapped between the parents
    • Two-Point Crossover:

      • Same as single point, except that only the genes between the 2 points are swapped
    • Uniform Crossover (Every Point):

      • It could also be called multipoint crossover; it’s like 2-point crossover but with a potential swap at every point
      • This technique can end up looking a lot like mutation
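
    Minimal sketches of the three binary operators over genomes represented as bit lists:

        import random

        def single_point(p1, p2):
            """Swap everything after one random cut point."""
            cut = random.randrange(1, len(p1))
            return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

        def two_point(p1, p2):
            """Swap only the genes between two random cut points."""
            a, b = sorted(random.sample(range(1, len(p1)), 2))
            return (p1[:a] + p2[a:b] + p1[b:],
                    p2[:a] + p1[a:b] + p2[b:])

        def uniform(p1, p2, swap_prob=0.5):
            """Potential swap at every single point."""
            c1, c2 = p1[:], p2[:]
            for i in range(len(p1)):
                if random.random() < swap_prob:
                    c1[i], c2[i] = c2[i], c1[i]
            return c1, c2
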
  • Some of the continuous-value crossover operators (sketched after this list):

    • Discrete Crossover: Swaps the variable values between individuals
    • Intermediate Crossover:

      • Offspring variable values are calculated as O = P1 + α·(P2 − P1)
      • The scaling factor α is chosen randomly from [−d, 1 + d]; normal intermediate crossover uses d = 0
    • Line Crossover: same as intermediate crossover, but all variables share the same scaling factor
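
    A minimal sketch of intermediate and line crossover for real-valued genomes, using the formula above:

        import random

        def intermediate(p1, p2, d=0.0):
            """Each variable gets its own alpha drawn from [-d, 1 + d]."""
            return [a + random.uniform(-d, 1 + d) * (b - a)
                    for a, b in zip(p1, p2)]

        def line(p1, p2, d=0.0):
            """Same idea, but one alpha is shared by every variable."""
            alpha = random.uniform(-d, 1 + d)
            return [a + alpha * (b - a) for a, b in zip(p1, p2)]
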
  • Some of the order specific crossover operators:

    • Partially Mapped Crossover (PMX):

      • You map a substring from one parent to the other; then, during crossover, whenever you find a mapped gene you swap it with its mapped value
      • A worked example appears in the book’s figure
    • Order-Based Crossover:

      • Choose several random genes from P1 and impose the order in which they appear there on the same genes within P2, swapping values as needed (the book shows a worked figure)
    • Position-Based Crossover:

      • Similar idea: the genes at randomly chosen positions are kept from one parent, and the remaining genes are filled in from the other parent in their original order
  • After crossover, mutation occurs:
  • Mutation means applying an operator to individual genes in the offspring
  • The rate of mutation (the chance that a given gene will be mutated) can vary widely depending on the problem
  • Many academic papers discuss this issue (455)
  • The specific type of mutation operator to apply is related to structure of genome
  • Common mutation operators for order-specific genomes:

    • Exchange: swapping a gene with another
    • Displacement: select 2 random positions within a genome, defining a substring; remove it and re-insert it elsewhere
    • Insertion: Works as displacement but on one gene. This is the recommended technique
  • Non-order-specific operators include the following (both sets are sketched after this list):

    • Binary Mutation: flip a bit within the genome
    • Real-Value Mutation: add a delta value to the gene
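
    Minimal sketches of these mutation operators, each mutating a genome list in place:

        import random

        def exchange(g):                  # order-specific: swap two genes
            i, j = random.sample(range(len(g)), 2)
            g[i], g[j] = g[j], g[i]

        def insertion(g):                 # order-specific: move one gene elsewhere
            gene = g.pop(random.randrange(len(g)))
            g.insert(random.randrange(len(g) + 1), gene)

        def binary_mutation(g):           # flip one random bit
            i = random.randrange(len(g))
            g[i] ^= 1

        def real_value_mutation(g, delta=0.1):   # nudge one real-valued gene
            i = random.randrange(len(g))
            g[i] += random.uniform(-delta, delta)
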
  • Remember, GAs are all about experimentation: finding out which operators to use, as well as how to tune their parameters
  • With GAs, the more time you put into them, the better the results you will receive
  • The highly scaled game-time problem: if you run the game at an accelerated clock to evolve faster, some calculations, such as collision checks, can be missed (473)
  • GAs are a brute-force method that can find solutions in very difficult or computationally expensive areas of game AI
  • Areas of using GA:

    • When you have a number of parameters that interact in highly nonlinear or surprising ways
    • When you have many local maxima and are searching for the best one (the car physics parameters example)
    • Solutions that involve discontinuous output (needs explanation; page 474)
    • When complementing traditional techniques
    • When the actual computation of a decision might be too costly (such as substituting for a heavy math function)
  • Pros of Genetic Algorithms:

    • Easy to set up
    • Easy to start getting results
    • GAs are often a very strong optimization algorithm, meaning you can find very good solutions
    • They tend to find global maximum solutions rather than getting stuck on local maxima
    • They can operate in parallel
  • Cons of Genetic Algorithms:

    • Time consuming
    • GAs perform their evolution offline
    • Hit-or-miss performance: getting the best combination requires time and experimentation
    • Weak definition of success and failure:

      • It’s hard to know where the flaw in your structure is
      • It’s hard to tell the difference between buggy code and merely under-evolved code
    • An optimal solution is not guaranteed
    • Tough to tune, and even tougher to add functionality to
  • Extensions to the Paradigm:

    • Ant Colony Algorithms:

      • Collective intelligence: ants help each other, so as a group they seem intelligent
      • Ants put pheromone marks on trails that lead to food, so other ants go there
      • Here we build solutions based on the successes of the entire population rather than the success of one individual
    • Co-evolution (478):

      • The concept covers cooperative and competitive evolution
      • In cooperative co-evolution, fitness increases when 2 creatures work together; in competitive co-evolution, one’s fitness increases while the other’s decreases
      • This type of GA could be used in RTS civilization games
    • Self-Adapting GAs:

      • During the evolution process, the crossover and mutation rates are themselves influenced and changed
      • The GA also tries to apply the most suitable operators to each genome
    • Genetic Programming:

      • Here we evolve a program (code) rather than parameters
      • If you use data-driven game AI (scripting), where your data is a small set of program instructions representing behavior, this technique can be used to evolve AI character scripts instead of having to author them
  • Design Considerations:

    • Types of Solutions: GAs are most suitable for the tactical level
    • Agent Reactivity: they support reactivity in your system
    • System Realism: because GAs find optimized solutions by nature, the result sometimes behaves in an unrealistic manner
    • Genre: could be used in RTS games for build order and for finding working tactics (such as rushing)
    • Platform: not an issue, because the work is mostly done offline
    • Development Limitations:

      • This is really our main concern, because:

        • GAs are not debuggable in any real sense
        • Do you really have the time and patience to find an optimal solution across different parameters, operators, and gene designs?
        • And if playtest feedback requires changes, re-evolving the solution is a hard thing to do
    • Entertainment Limitations:

      • Game difficulty levels can be handled by running a separate GA for each level, each with its own fitness function
  • Appendix:

    • Stochastic optimization (SO) methods are optimization algorithms which incorporate probabilistic (random) elements, either in the problem data (the objective function, the constraints, etc.), in the algorithm itself (through random parameter values, random choices, etc.), or in both. The concept contrasts with deterministic optimization methods, where the values of the objective function are assumed to be exact and the computation is completely determined by the values sampled so far