

Computer games are an important aspect of Artificial Intelligence (AI) and have produced many representative algorithms for finding optimal strategies. Consequently, many AI researchers have applied game theory to a variety of board games in an attempt to capture the essence of games and the core of AI behaviour. Chinese chess is a traditional chess variant with its own distinctive strategy; however, because research on it started late and the game is highly complex, many technical problems remain to be solved. In this paper, we recommend two commonly used search frameworks for Chinese chess: the minimax algorithm and the alpha-beta pruning algorithm. Monte Carlo tree search (MCTS) can also be used to estimate the value of each node in the search tree and optimize the possible results; AlphaGo, a famous and representative game-playing program, uses Monte Carlo tree search.

The aim of general game playing (GGP) is to create intelligent agents that can automatically learn how to play many different games at an expert level without any human intervention. The traditional design model for GGP agents has been a minimax-based game-tree search augmented with an automatically learned heuristic evaluation function, and the first successful GGP agents all followed that model. In this paper, we describe CadiaPlayer, a GGP agent that takes a radically different approach: instead of a traditional game-tree search, it uses Monte Carlo simulations for its move decisions. Furthermore, we empirically evaluate different simulation-based approaches on a wide variety of games, introduce a domain-independent enhancement for automatically learning search-control knowledge to guide the simulation playouts, and show how to adapt the simulation searches to be more effective in single-agent games. CadiaPlayer has already proven its effectiveness by winning the 2007 and 2008 Association for the Advancement of Artificial Intelligence (AAAI) GGP competitions.
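The two search frameworks recommended above can be sketched together, since alpha-beta pruning is an optimization of plain minimax that skips branches which cannot affect the final decision. The `Node` interface below is a hypothetical stand-in, not code from any of the systems mentioned; a real Chinese chess engine would generate children from a board position and evaluate leaves with a heuristic:

```python
import math

class Node:
    """Minimal game-tree node for illustration only."""
    def __init__(self, value=None, children=()):
        self._value, self._children = value, list(children)

    def is_terminal(self):
        return not self._children

    def evaluate(self):
        return self._value

    def children(self):
        return self._children

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot change the final decision."""
    if depth == 0 or node.is_terminal():
        return node.evaluate()
    if maximizing:
        value = -math.inf
        for child in node.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: the minimizer avoids this line
                break
        return value
    value = math.inf
    for child in node.children():
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:       # alpha cutoff: the maximizer avoids this line
            break
    return value

# Example: a max root over two min nodes. The 9 leaf is never evaluated,
# because the second min node is already bounded above by 2 < 3.
tree = Node(children=[Node(children=[Node(3), Node(5)]),
                      Node(children=[Node(2), Node(9)])])
print(alphabeta(tree, 2, -math.inf, math.inf, True))  # -> 3
```

Without pruning, plain minimax would visit every leaf; alpha-beta returns the same value while examining fewer positions, which is why both frameworks are usually presented as a pair.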
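The simulation-based approach that CadiaPlayer pioneered for GGP (and that AlphaGo later combined with neural networks) can likewise be sketched. Below is a minimal UCT-style Monte Carlo tree search: each node's value is estimated by averaging random playout outcomes, and the UCB1 formula balances exploring rarely tried moves against exploiting promising ones. The toy `Nim` game and all identifiers are illustrative assumptions, not code from the systems described above:

```python
import math
import random

class Nim:
    """Toy game used only to exercise the search: players alternately
    remove 1-3 stones; whoever takes the last stone wins."""
    def __init__(self, stones, player=1):
        self.stones, self.player = stones, player

    def legal_moves(self):
        return list(range(1, min(3, self.stones) + 1))

    def play(self, move):
        return Nim(self.stones - move, -self.player)

    def is_terminal(self):
        return self.stones == 0

    def winner(self):
        return -self.player  # the player who just took the last stone

class MCTSNode:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.untried = [], state.legal_moves()
        self.visits, self.wins = 0, 0.0

    def ucb1(self, c=1.4):
        # Average win rate plus an exploration bonus for rarely tried moves.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts_best_move(state, iterations=3000, seed=0):
    random.seed(seed)
    root = MCTSNode(state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 through fully expanded nodes.
        while not node.untried and node.children:
            node = max(node.children, key=MCTSNode.ucb1)
        # 2. Expansion: add one unexplored child.
        if node.untried:
            move = node.untried.pop()
            child = MCTSNode(node.state.play(move), node, move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to a terminal state.
        sim = node.state
        while not sim.is_terminal():
            sim = sim.play(random.choice(sim.legal_moves()))
        winner = sim.winner()
        # 4. Backpropagation: credit each node from the perspective of
        #    the player who made the move leading into it.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.state.player:
                node.wins += 1
            node = node.parent
    # Play the most-visited move, the usual UCT recommendation.
    return max(root.children, key=lambda n: n.visits).move

# From 5 stones the winning reply leaves the opponent a multiple of 4.
print(mcts_best_move(Nim(5)))
```

Note that the search needs only the game rules (move generation and terminal outcomes), not a hand-crafted evaluation function; this is precisely why simulation-based search suits GGP, where the agent cannot know the game in advance.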
