
Thursday, 16 February 2017

I read this: Chess Metaphors: Artificial Intelligence and the Human Mind

Chess Metaphors: Artificial Intelligence and the Human Mind, by Diego Rasskin-Gutman, uses chess to explain concepts of intelligence, both organic and artificial. The first part of the book covers the human mind and cognitive models like schemas and memory chunking, which I only skimmed. The second part discusses how an AI program can efficiently explore all the relevant consequences of any given chess move.

In the AI portion, 'Metaphors' explains how minimax algorithms work, and discusses the Alpha-Beta algorithm, a chess-playing staple, in particular. Minimax algorithms are useful for zero-sum two-player games like chess and most other head-to-head games.

Algorithms of this type work by compiling a set of possible moves, and for each move a set of possible responses by the opponent. The consequences of each considered response (e.g. the resulting board state) are evaluated. The opponent is assumed to pick the response that is worst from the AI's perspective, and the evaluation of that board state (e.g. how 'good' it is) is recorded as the value of making that move. This repeats for each possible move, so the AI ends up with the worst case that follows from each possible move (each minimum). It then chooses the move that produces the best of those worst cases (the maximum over the minima, hence 'minimax'). If someone playing against this AI doesn't respond with that optimal reply, all the better.
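Here's a minimal Python sketch of that one-move-deep search. The helpers legal_moves, apply_move, and evaluate, and the side labels AI and OPPONENT, are all hypothetical stand-ins (a move generator, a board updater, and a scoring function from the AI's point of view), not anything from the book or from a real engine.

    def best_move_one_ply(board):
        # For each of our moves, assume the opponent picks the reply that is
        # worst for us, then choose the move whose worst case is best.
        best_move, best_value = None, float('-inf')
        for move in legal_moves(board, AI):
            after_move = apply_move(board, move)
            worst = min((evaluate(apply_move(after_move, reply))
                         for reply in legal_moves(after_move, OPPONENT)),
                        default=evaluate(after_move))
            if worst > best_value:
                best_move, best_value = move, worst
        return best_move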

The algorithm described above only looks one move (one ply) ahead for each side. To consider deeper strategies, the process is repeated, with an exponential explosion of possibilities. Understandably, many of these algorithms differ mainly in how they avoid evaluating unnecessary positions, which is exactly what Alpha-Beta pruning does.
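A sketch of the deeper search with Alpha-Beta pruning, using the same assumed helpers as above. A branch is abandoned as soon as it's provably irrelevant to the final choice, i.e. when alpha meets beta.

    def alphabeta(board, depth, alpha, beta, maximizing):
        # Depth-limited search; prune branches that cannot change the result.
        if depth == 0:
            return evaluate(board)
        player = AI if maximizing else OPPONENT
        moves = legal_moves(board, player)
        if not moves:
            return evaluate(board)          # no legal moves: just score the position
        if maximizing:
            value = float('-inf')
            for move in moves:
                value = max(value, alphabeta(apply_move(board, move),
                                             depth - 1, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:           # opponent would steer away from this line
                    break
            return value
        value = float('inf')
        for move in moves:
            value = min(value, alphabeta(apply_move(board, move),
                                         depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:               # we would steer away from this line
                break
        return value

Whatever it skips, it returns the same value a plain minimax search of the same depth would; the pruning only drops positions that couldn't change the answer.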

What really surprised me is how simple the evaluations can be. A very common evaluation method is to assign a value to each piece and a value to each square that piece can move to. This implies that such an AI would be functional, although not optimal, for a wide range of chess variants. In fact, if only the orthodox pieces are used, no additional programming would be required beyond altering the possible moves to fit the new board. An algorithm like Alpha-Beta could be applied 'out of the box' to variants that don't use new pieces. This would explain why the 'chess variants' app that offers smaller boards like Gardner's Minichess and rearranged boards like Chess960 uses Alpha-Beta.
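An evaluation like that fits in a few lines. In this sketch the piece values are the traditional pawn-relative ones, the mobility weight is an arbitrary guess, and board.pieces() with a .kind attribute on each piece is an assumed interface rather than a real library.

    # Traditional pawn-relative piece values; the mobility weight is a guess.
    PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}
    MOBILITY_WEIGHT = 0.1

    def evaluate(board):
        # Material difference plus a small bonus per reachable square,
        # all from the AI's point of view.
        material = (sum(PIECE_VALUE[p.kind] for p in board.pieces(AI))
                    - sum(PIECE_VALUE[p.kind] for p in board.pieces(OPPONENT)))
        mobility = MOBILITY_WEIGHT * (len(legal_moves(board, AI))
                                      - len(legal_moves(board, OPPONENT)))
        return material + mobility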

Including new pieces would involve programming in their possible moves and assigning them a material value (e.g. worth 4 pawns). Therefore, adapting an existing AI to many of the variants described in John Gollon's Chess Variations: Ancient, Regional, and Modern should be feasible. There are many ways to tune the relative value of pieces and of immediate mobility, including pitting AIs with different parameter values against each other in an evolutionary pool.
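For instance, a new leaper might need little more than its jump pattern and a rough material value. The layout below is only an illustration of how small that addition could be, not any engine's actual format; the 4-pawn figure is just the example value from above.

    # Hypothetical example: a 'camel', a leaper like the knight but with a
    # (3, 1) jump instead of (2, 1).
    NEW_PIECES = {
        'camel': {
            'value': 4,          # illustrative material value, in pawns
            'steps': [(3, 1), (1, 3), (-1, 3), (-3, 1),
                      (-3, -1), (-1, -3), (1, -3), (3, -1)],
            'sliding': False,    # jumps directly to the target square
        },
    }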


Footnote: Judging by http://www.chessvariants.com/ and the chess variants subreddit, it's much easier to make a variant than to drum up support and a player base for it.

To consider later: Smess (also available as an app):
https://boardgamegeek.com/boardgame/1289/smess-ninnys-chess
In this game, the pieces themselves have very simple moves, and the differences in movement come from the board itself.

