State-space complexity (the number of legal game positions reachable from the initial position)
Game tree size (total number of possible games)
Decision complexity (number of leaf nodes in the smallest decision tree for the initial position)
Game-tree complexity (number of leaf nodes in the smallest full-width decision tree for the initial position)
Computational complexity (asymptotic difficulty of a game as it grows arbitrarily large)
These measures involve understanding the game positions, possible outcomes, and computational complexity of various game scenarios.
Measures of game complexity
State-space complexity
The state-space complexity of a game is the number of legal game positions reachable from the initial position of the game.[1]
When this is too hard to calculate, an upper bound can often be computed by also counting (some) illegal positions (positions that can never arise in the course of a game).
Game tree size
The game tree size is the total number of possible games that can be played. This is the number of leaf nodes in the game tree rooted at the game's initial position.
The game tree is typically vastly larger than the state space because the same position can be reached in many different games when moves are made in a different order (for example, a tic-tac-toe position with two X's and one O on the board could have been reached in two different ways, depending on where the first X was placed). An upper bound for the size of the game tree can sometimes be computed by simplifying the game in a way that only increases the size of the game tree (for example, by allowing illegal moves) until it becomes tractable.
For games where the number of moves is not limited (for example by the size of the board, or by a rule about repetition of position) the game tree is generally infinite.
Decision trees
A decision tree is a subtree of the game tree, with each position labelled "player A wins", "player B wins", or "draw" if that position can be proved to have that value (assuming best play by both sides) by examining only other positions in the graph. Terminal positions can be labelled directly from the game's result. A non-terminal position with player A to move can be labelled "player A wins" if any successor position is a win for A, "player B wins" if all successor positions are wins for B, or "draw" if all successor positions are either drawn or wins for B. (Positions with player B to move are labelled correspondingly.)
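As a small illustration, the labelling rule for a position with player A to move can be written as a short Python sketch (the "A"/"B"/"draw" labels of the successor positions are assumed to be known already; this only illustrates the rule above and is not tied to any particular game):

    # Label a non-terminal position with player A to move, given the labels
    # ("A", "B", or "draw") already assigned to its successor positions.
    def label_with_A_to_move(successor_labels):
        if any(label == "A" for label in successor_labels):
            return "A"       # A can move to a position already known to be a win for A
        if all(label == "B" for label in successor_labels):
            return "B"       # every move leads to a position won by B
        return "draw"        # no winning move, but at least one move avoids a loss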
The following two methods of measuring game complexity use decision trees:
Decision complexity
Decision complexity of a game is the number of leaf nodes in the smallest decision tree that establishes the value of the initial position.
Game-tree complexity
Game-tree complexity of a game is the number of leaf nodes in the smallest full-width decision tree that establishes the value of the initial position.[1] A full-width tree includes all nodes at each depth. This is an estimate of the number of positions one would have to evaluate in a minimax search to determine the value of the initial position.
It is hard even to estimate the game-tree complexity, but for some games an approximation can be given by b^d, where b is the game's average branching factor and d is the number of plies in an average game.
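As a rough worked example, plugging in the branching factor of about 35 and game length of about 80 plies commonly quoted for chess (both figures are illustrative approximations, not exact measurements) gives an estimate of roughly 10^124:

    import math

    b = 35    # assumed average branching factor (commonly quoted for chess)
    d = 80    # assumed average game length in plies (commonly quoted for chess)

    # b**d itself is astronomically large, so report log10(b**d) = d * log10(b)
    print(f"game-tree complexity is roughly 10^{d * math.log10(b):.0f}")   # roughly 10^124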
Computational complexity
The computational complexity of a game describes the asymptotic difficulty of a game as it grows arbitrarily large, expressed in big O notation or as membership in a complexity class. This concept does not apply to particular games, but rather to games that have been generalized so they can be made arbitrarily large, typically by playing them on an n-by-n board. (From the point of view of computational complexity, a game on a board of fixed size is a finite problem that can be solved in O(1), for example by a look-up table from positions to the best move in each position.)
The asymptotic complexity is defined by the most efficient algorithm for solving the game (in terms of whatever computational resource one is considering). The most common complexity measure, computation time, is always lower-bounded by the logarithm of the asymptotic state-space complexity, since a solution algorithm must work for every possible state of the game. It will be upper-bounded by the complexity of any particular algorithm that works for the family of games. Similar remarks apply to the second-most commonly used complexity measure, the amount of space or computer memory used by the computation. It is not obvious that there is any lower bound on the space complexity for a typical game, because the algorithm need not store game states; however, many games of interest are known to be PSPACE-hard, and it follows that their space complexity will be lower-bounded by the logarithm of the asymptotic state-space complexity as well (technically, the bound is only a polynomial in this quantity, but it is usually known to be linear).
The depth-first minimax strategy will use computation time proportional to the game's tree-complexity (since it must explore the whole tree), and an amount of memory polynomial in the logarithm of the tree-complexity (since the algorithm must always store one node of the tree at each possible move-depth, and the number of nodes at the highest move-depth is precisely the tree-complexity).
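A minimal Python sketch of such a search is below; successors and terminal_value are hypothetical hooks that a particular game would have to supply (returning the list of legal successor positions and the value of a finished game, respectively):

    # Depth-first minimax: only the current line of play sits on the call stack,
    # so memory grows with the search depth, while running time grows with the
    # number of leaf positions explored.
    def minimax(pos, successors, terminal_value, maximizing=True):
        children = successors(pos)
        if not children:                         # terminal position
            return terminal_value(pos)           # e.g. +1 (A wins), 0 (draw), -1 (B wins)
        child_values = (minimax(child, successors, terminal_value, not maximizing)
                        for child in children)
        return max(child_values) if maximizing else min(child_values)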
Backward induction will use both memory and time proportional to the state-space complexity, as it must compute and record the correct move for each possible position.
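The same idea can be sketched as a memoised search that stores a value for every position it encounters (again with the hypothetical successors and terminal_value hooks, and assuming positions are hashable):

    # Backward induction via memoisation: the table ends up holding a value for
    # every reachable position, so both running time and memory are proportional
    # to the state-space complexity.
    def solve(pos, successors, terminal_value, maximizing=True, table=None):
        if table is None:
            table = {}
        key = (pos, maximizing)
        if key in table:
            return table[key]
        children = successors(pos)
        if not children:
            value = terminal_value(pos)
        else:
            child_values = [solve(child, successors, terminal_value, not maximizing, table)
                            for child in children]
            value = max(child_values) if maximizing else min(child_values)
        table[key] = value
        return value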
Example: tic-tac-toe (noughts and crosses)
For tic-tac-toe, a simple upper bound for the size of the state space is 3^9 = 19,683. (There are three states for each of the nine cells.) This count includes many illegal positions, such as a position with five crosses and no noughts, or a position in which both players have a row of three. A more careful count, removing these illegal positions, gives 5,478.[2][3] And when rotations and reflections of positions are considered identical, there are only 765 essentially different positions.
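These counts are small enough to verify by brute force. The following Python sketch enumerates all 3^9 cell assignments and keeps only those consistent with alternating play from an empty board:

    from itertools import product

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
             (0, 4, 8), (2, 4, 6)]               # diagonals

    def wins(board, player):
        return any(all(board[i] == player for i in line) for line in LINES)

    legal = 0
    for board in product('XO.', repeat=9):       # all 3^9 = 19,683 assignments
        x, o = board.count('X'), board.count('O')
        if x not in (o, o + 1):                  # X moves first and the players alternate
            continue
        x_won, o_won = wins(board, 'X'), wins(board, 'O')
        if x_won and o_won:                      # both players cannot have completed a row
            continue
        if x_won and x != o + 1:                 # X's winning move must have been the last
            continue
        if o_won and x != o:                     # O's winning move must have been the last
            continue
        legal += 1

    print(legal)                                 # 5478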
To bound the game tree, there are 9 possible initial moves, 8 possible responses, and so on, so that there are at most 9! or 362,880 total games. However, games may take fewer than 9 moves to resolve, and an exact enumeration gives 255,168 possible games. When rotations and reflections of positions are considered the same, there are only 26,830 possible games.
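The game count can be checked in the same way by playing out every game from the empty board and stopping as soon as a player completes a row or the board is full (this reuses the wins helper from the sketch above):

    # Count the leaf nodes of the tic-tac-toe game tree.
    def count_games(board='.' * 9, player='X'):
        if wins(board, 'X') or wins(board, 'O') or '.' not in board:
            return 1                             # finished game: one leaf node
        return sum(count_games(board[:i] + player + board[i + 1:],
                               'O' if player == 'X' else 'X')
                   for i, cell in enumerate(board) if cell == '.')

    print(count_games())                         # 255168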
The computational complexity of tic-tac-toe depends on how it is generalized. A natural generalization is to m,n,k-games: played on an m by n board, with the winner being the first player to get k in a row. This game can be solved in DSPACE(mn) by searching the entire game tree. This places it in the important complexity class PSPACE; with more work, it can be shown to be PSPACE-complete.[4]
Complexities of some well-known games
Because game complexities are so large, this table gives only the ceiling of their base-10 logarithm (in other words, the number of digits). All of the following numbers should be considered with caution: seemingly minor changes to the rules of a game can change the numbers (which are often rough estimates anyway) by tremendous factors, which might easily be much greater than the numbers shown.
^Double dummy bridge (i.e., double dummy problems in the context of contract bridge) is not a proper board game but has a similar game tree, and is studied in computer bridge. The bridge table can be regarded as having one slot for each player and trick to play a card in, which corresponds to board size 52. Game-tree complexity is a very weak upper bound: 13! to the power of 4 players regardless of legality. State-space complexity is for one given deal; likewise regardless of legality but with many transpositions eliminated. The last 4 plies are always forced moves with branching factor 1.
^Stefan Reisch (1980). "Gobang ist PSPACE-vollständig (Gobang is PSPACE-complete)". Acta Informatica. 13 (1): 59–66. doi:10.1007/bf00288536. S2CID 21455572.
^Stefan Reisch (1981). "Hex ist PSPACE-vollständig (Hex is PSPACE-complete)". Acta Informatica. 15: 167–191.
^Slany, Wolfgang (2000). "The complexity of graph Ramsey games". In Marsland, T. Anthony; Frank, Ian (eds.). Computers and Games, Second International Conference, CG 2000, Hamamatsu, Japan, October 26-28, 2000, Revised Papers. Lecture Notes in Computer Science. Vol. 2063. Springer. pp. 186–203. doi:10.1007/3-540-45579-5_12.
^Orman, Hilarie K. (1996). "Pentominoes: a first player win" (PDF). In Nowakowski, Richard J. (ed.). Games of No Chance: Papers from the Combinatorial Games Workshop held in Berkeley, CA, July 11–21, 1994. Mathematical Sciences Research Institute Publications. Vol. 29. Cambridge University Press. pp. 339–344. ISBN 0-521-57411-0. MR 1427975.
^Lachmann, Michael; Moore, Cristopher; Rapaport, Ivan (2002). "Who wins Domineering on rectangular boards?". In Nowakowski, Richard (ed.). More Games of No Chance: Proceedings of the 2nd Combinatorial Games Theory Workshop held in Berkeley, CA, July 24–28, 2000. Mathematical Sciences Research Institute Publications. Vol. 42. Cambridge University Press. pp. 307–315. ISBN 0-521-80832-4. MR 1973019.
^Bonnet, Edouard; Jamain, Florian; Saffidine, Abdallah (2013). "On the complexity of trick-taking card games". In Rossi, Francesca (ed.). IJCAI 2013, Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, China, August 3-9, 2013. IJCAI/AAAI. pp. 482–488.
^Kasai, Takumi; Adachi, Akeo; Iwata, Shigeki (1979). "Classes of pebble games and complete problems". SIAM Journal on Computing. 8 (4): 574–586. doi:10.1137/0208046. MR 0573848. Proves completeness of the generalization to arbitrary graphs.
^The size of the state space and game tree for chess were first estimated in Claude Shannon (1950). "Programming a Computer for Playing Chess" (PDF). Philosophical Magazine. 41 (314). Archived from the original (PDF) on 2010-07-06. Shannon gave estimates of 10^43 and 10^120 respectively, smaller than the upper bound in the table, which is detailed in Shannon number.
^Gualà, Luciano; Leucci, Stefano; Natale, Emanuele (2014). "Bejeweled, Candy Crush and other match-three games are (NP-)hard". 2014 IEEE Conference on Computational Intelligence and Games, CIG 2014, Dortmund, Germany, August 26-29, 2014. IEEE. pp. 1–8. arXiv:1403.5830. doi:10.1109/CIG.2014.6932866.
^The lower branching factor is for the second player.
^Kloetzer, Julien; Iida, Hiroyuki; Bouzy, Bruno (2007). "The Monte-Carlo approach in Amazons"(PDF). Computer Games Workshop, Amsterdam, the Netherlands, 15-17 June 2007. pp. 185–192.
^CDA Evans and Joel David Hamkins (2014). "Transfinite game values in infinite chess". arXiv:1302.4377 [math.LO].
^Dan Brumleve, Joel David Hamkins, and Philipp Schlicht (2012). "The mate-in-n problem of infinite chess is decidable". Conference on Computability in Europe: 78–88. arXiv:1201.5597.
^Alex Churchill, Stella Biderman, and Austin Herrick (2020). "Magic: the Gathering is Turing Complete". arXiv:1904.09828 [cs.AI].
^Stella Biderman (2020). "Magic: the Gathering is as Hard as Arithmetic". arXiv:2003.05119 [cs.AI].