Artificial Intelligence in RTS/War/Strategy games roundtable (Friday)
Overall Structure of an RTS AI
- Hierarchy! (many) levels of entities; important to address communication between levels; resolve conflicting desires from above and self-preservation.
- Momentum: keep working on the current goal instead of constantly switching.
- Command level can be in a different thread; pathfinding in a third thread (handled > 1000 units in "Enemy Nations").
- In Moo3/turn-based game: randomized decisions with weighting of choices; current goal is given a big bonus.
- Add the cost of abandoning the current choice to the cost of all other choices. However, distance factors are relevant: if heading toward point A with some forces, it may not cost much to switch to target B if B is close to A.
- Have to weight distance, time, cost, etc. Importance of each factor changes with time.
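The last few notes (weighted random choice, current-goal bonus, momentum) can be sketched as follows. This is a minimal illustration, not anyone's shipped code; the bonus factor and the idea of folding the switching cost into a multiplicative bonus are assumptions.

```python
import random

def choose_goal(scores, current=None, momentum_bonus=1.5, rng=random):
    """Weighted random choice among goals; scores: dict goal -> desirability >= 0.

    The current goal keeps a multiplicative bonus, so a competing goal
    must beat that margin before the AI switches (momentum/hysteresis).
    """
    weighted = {}
    for goal, score in scores.items():
        weighted[goal] = score * (momentum_bonus if goal == current else 1.0)
    total = sum(weighted.values())
    pick = rng.uniform(0, total)
    for goal, w in weighted.items():
        pick -= w
        if pick <= 0:
            return goal
    return goal  # fallback for floating-point edge cases
```

With equal base scores, the current goal wins roughly 60% of the time at a 1.5x bonus, which is the "big bonus to the current goal" behavior described for Moo3, scaled down for illustration.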
Achieving multiple goals
- Ex: sports game, must satisfy many constraints in addition to trying to further several goals.
- State machines are a poor fit to this situation, but fuzzy state machines address the problem.
- In Moo3, many factors go into whether to colonize (quality of the planet, but also strategic position); the ship itself, however, only makes a binary decision.
- Alternatively, can use a complicated state selection method based on analysis of desires.
- For hierarchy/fuzzy AI see Game Programming Gems 2.
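As a toy illustration of the colonize-or-not situation above (many weighted desires collapsed into one binary decision), a weighted-sum-plus-threshold sketch; the factor names, weights, and threshold are all assumptions, not Moo3's actual values.

```python
def should_colonize(factors, weights, threshold=0.5):
    """factors/weights: dicts keyed by factor name, values in [0, 1].

    Normalizes a weighted sum of the desire factors and compares it to a
    threshold, yielding the ship's single yes/no decision.
    """
    total_weight = sum(weights.values())
    score = sum(factors[k] * weights[k] for k in weights) / total_weight
    return score >= threshold

# Example: a good planet in a weak strategic spot.
decision = should_colonize(
    {"planet_quality": 0.9, "strategic_value": 0.2},
    {"planet_quality": 0.6, "strategic_value": 0.4},
)  # -> True (score 0.62 clears the 0.5 threshold)
```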
- Moo3: What enemy forces do we know about? How quickly can they arrive? Are they headed this way? How advanced technologically are they?
- Can use influence maps: terrain modifies the effect, can add up several influence maps to achieve a final score.
- There is a Game Programming Gems article on using influence maps to find fronts.
- Compute how many units are in range.
- Connectivity affects whether geography important, so influence maps may not apply (if you can get from any point to any other in a turn for example).
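A minimal influence-map sketch along the lines above: each unit spreads influence that falls off with distance, and friendly and enemy maps sum into one score per cell. The grid size, Manhattan falloff, and strengths are illustrative assumptions.

```python
def influence_map(width, height, units):
    """units: list of (x, y, strength); negative strength = enemy.

    Returns a grid (list of rows) where each cell holds the summed
    influence of all units, attenuated by Manhattan distance.
    """
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength in units:
        for y in range(height):
            for x in range(width):
                dist = abs(x - ux) + abs(y - uy)  # Manhattan distance
                grid[y][x] += strength / (1 + dist)  # simple 1/(1+d) falloff
    return grid
```

Cells where the summed score crosses zero between positive (friendly) and negative (enemy) regions mark the front; terrain modifiers would multiply the per-cell contributions.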
- People use A*, Dijkstra, hierarchical A*, and cheap methods.
- Dijkstra is said to be good for non-Euclidean search spaces, but another view is that you should instead use A* with the minimum outgoing edge cost as the heuristic.
- Cheap methods (follow left wall) used to get someone out of the way of a car in a hurry.
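For reference, a compact grid A* of the kind listed above. Assumptions: a 4-connected grid with unit step costs and a Manhattan heuristic (admissible for such a grid); this is a teaching sketch, not an optimized RTS pathfinder.

```python
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' blocks movement.

    Returns the shortest path as a list of (x, y) cells, or None.
    """
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, position, path)
    seen = set()
    while open_set:
        _, cost, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] != '#' and (nx, ny) not in seen):
                heapq.heappush(open_set,
                               (cost + 1 + h((nx, ny)), cost + 1,
                                (nx, ny), path + [(nx, ny)]))
    return None
```

Dropping the heuristic (`h` returning 0) turns this into Dijkstra; a hierarchical variant would run the same search over clusters of cells first.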
- Steering: the problem with steering is that you can't find a narrow path through a dense set of obstacles.
- Steering: Good for moving a group together towards a target.
- Steering: Look up Robin Greene and Craig Reynolds (red3d.com) on the web
- Adaptive force field: what I would normally do + (some weight times) learned behavior (from what has failed in the past).
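The adaptive-force-field formula above ("what I would normally do + weight x learned behavior") might be sketched as below. The inverse-square falloff and weight value are assumptions for illustration.

```python
def adaptive_steering(pos, goal, failure_spots, weight=0.5):
    """All points are (x, y); returns a steering vector (dx, dy)."""
    # Base behavior: head straight for the goal.
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    # Learned behavior: push away from each remembered failure spot,
    # with influence falling off with squared distance.
    for fx, fy in failure_spots:
        rx, ry = pos[0] - fx, pos[1] - fy
        d2 = rx * rx + ry * ry + 1e-6  # avoid division by zero
        dx += weight * rx / d2
        dy += weight * ry / d2
    return dx, dy
```

Each failed attempt appends its location to `failure_spots`, so the field gradually steers units around trouble they have hit before.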
- Enemy Nations: 1 thread per CPU opponent, +1 for pathfinding, +1 for the game itself.
- Another model uses one thread for control, one for pathfinding, one for game.
- Look up "Operation planning" -- planning is not that exotic.
Chess techniques apply
- AI Wisdom 2 has an article on insect AI
- Neural networks and genetic algorithms have a place before shipping: GAs at least are easy to code.
- Moo3 does not use a cheating AI (unless accidentally). Seeing fewer units means faster processing! Cheating creates player frustration.
- For knowledge that gets out of date ("I saw a large group of forces at x,y"), keep a certainty value that decreases over time.
- Moo3 does not estimate forces it can't see.
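The decaying-certainty idea above can be sketched as exponential decay: confidence in a sighting halves every few turns. The half-life value is an illustrative assumption.

```python
import math

def certainty(initial, turns_since_seen, half_life=5.0):
    """Confidence in stale intel, halving every `half_life` turns."""
    return initial * math.pow(0.5, turns_since_seen / half_life)
```

Once the certainty drops below some threshold, the AI would discard the sighting rather than act on it (consistent with not estimating forces it can't see).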
- "Rely on the player's imagination -- use simple techniques"
- "Create the illusion of intelligence"
- "Players want to feel bad-ass"
- Gamedev.net has a forum & links; Gameai.com is a gateway to other resources
- "Robocup": team play for soccer