Objective:
There is a great deal of very similar material online about intelligent algorithms for board games. I am not writing this article to be yet another Internet porter. Rather, it is partly an expression of respect for the classic "PC Game Programming (Man-Machine Play)", and partly a chance to review my own game-programming journey.
Here we take Othello (also known as Reversi, or "black and white chess") as an example, and explain the key points of writing a game AI from two aspects: playing and learning. This article focuses on the playing part (evaluation function + game-tree algorithm).
Game:
When watching a Go match, people often judge a player's strength by two things: a strong sense of the whole board (good at evaluating the position) and accurate reading (calculating deep move sequences that work out well in practice). As the saying goes, stones from other hills can polish our jade: a game AI likewise builds everything on these same two points, evaluating the position and searching the game deeply.
(i) Evaluation function:
Let us first discuss evaluating the position. From a programming point of view, how do we properly assess the state of a game?
First, whether a position is good or bad depends on a number of factors (with different weights, whose importance changes at different stages of the game), and the influence of each factor must be converted into a numerical measure.
To simplify the model, we introduce an evaluation function g(s), where s is the current position and g(s) is its score.
g(s) = a1*f1(s) + a2*f2(s) + ... + an*fn(s)
Note: fi(s) is the scoring function for one evaluation factor, and ai is its weight coefficient.
The evaluation function g(s) introduces the mathematical model that underlies the game AI's intelligence; it is the foundation of everything that follows.
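As a concrete illustration, the weighted sum g(s) can be sketched in a few lines of Python. The two factor functions and the weight values below are illustrative placeholders I made up, not tuned values from the text.

```python
# A minimal sketch of g(s) = a1*f1(s) + a2*f2(s) + ... + an*fn(s).
# The factors and weights here are toy examples, not real tuned values.

def g(s, factors, weights):
    """Weighted sum of factor scores over the position s."""
    return sum(a * f(s) for f, a in zip(factors, weights))

# Two toy factors evaluated on a toy state:
disc_diff    = lambda s: s["my_discs"] - s["opp_discs"]  # material difference
corner_count = lambda s: s["my_corners"]                 # corners we hold

s = {"my_discs": 20, "opp_discs": 16, "my_corners": 1}
print(g(s, [disc_diff, corner_count], [1.0, 25.0]))  # 4*1.0 + 1*25.0 = 29.0
```

Everything that follows is about choosing good factors fi and good weights ai.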
Back to the Othello game itself: based on experience, we select the following evaluation factors:
1). Positional valuation table
Othello, like Go, obeys the law of "golden corners, silver edges, rotten belly": the four corners are the most valuable territory, followed by the four edges. We therefore assign each point of the 8*8 board a positional value, generally following the pattern of heavy corners and a light middle.
Potential_energy(s) = ∑ pe[x, y] {Map[x, y] is occupied, 0 <= x < 8, 0 <= y < 8}
Note: Potential_energy(s) is the positional evaluation function, pe[x, y] is the positional valuation matrix, and Map[x, y] is the game board itself.
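A sketch of this factor in Python follows. The weight quadrant is an illustrative "heavy corners, light belly" pattern of my own, not the article's exact matrix, and I use a common player-relative variant (our discs add, the opponent's subtract); the board representation (8x8 lists, 0 = empty, +1/-1 = the two sides) is also an assumption.

```python
# Sketch of the positional ("terrain") score. QUAD holds illustrative
# weights for one corner quadrant; the full 8x8 table pe[x][y] is built
# by mirroring it, so all four corners are heavy and the belly is light.

QUAD = [
    [90, -10, 10,  5],
    [-10, -20,  1,  1],
    [10,   1,  3,  2],
    [5,    1,  2,  1],
]
PE = [[QUAD[min(x, 7 - x)][min(y, 7 - y)] for y in range(8)] for x in range(8)]

def potential_energy(board, player):
    """Sum pe[x][y] over squares held by `player`, minus the opponent's."""
    total = 0
    for x in range(8):
        for y in range(8):
            if board[x][y] == player:
                total += PE[x][y]
            elif board[x][y] == -player:
                total -= PE[x][y]
    return total

board = [[0] * 8 for _ in range(8)]
board[0][0] = 1    # our corner: +90
board[1][1] = -1   # opponent on the dangerous X-square: -(-20) = +20 for us
print(potential_energy(board, 1))  # 110
```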
2). Mobility
This factor rests on a hypothesis: in a given position, having more choices means flexibility and initiative, while having fewer choices often forces you into passivity. The number of available choices is therefore a useful reference when evaluating a position. We call the number of legal moves that can be played in a position the mobility (action force).
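Mobility can be sketched with a standard Othello legality check (a move must bracket at least one line of opponent discs). The board representation (8x8 lists, 0 = empty, +1/-1 = the two sides) is an assumption of this sketch.

```python
# Sketch of the mobility ("action force") factor: count the legal moves
# available to a player under the standard Othello bracketing rule.

DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def is_legal(board, x, y, player):
    """True if placing `player` at (x, y) would flip at least one disc."""
    if board[x][y] != 0:
        return False
    for dx, dy in DIRS:
        nx, ny = x + dx, y + dy
        seen_opponent = False
        # walk over a run of opponent discs...
        while 0 <= nx < 8 and 0 <= ny < 8 and board[nx][ny] == -player:
            seen_opponent = True
            nx, ny = nx + dx, ny + dy
        # ...that must end on one of our own discs
        if seen_opponent and 0 <= nx < 8 and 0 <= ny < 8 and board[nx][ny] == player:
            return True
    return False

def mobility(board, player):
    return sum(is_legal(board, x, y, player)
               for x in range(8) for y in range(8))

# Standard opening position: each side has exactly 4 legal moves.
board = [[0] * 8 for _ in range(8)]
board[3][3] = board[4][4] = -1
board[3][4] = board[4][3] = 1
print(mobility(board, 1))  # 4
```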
3). Stable discs
A stable disc is one that can never be flipped under any circumstances. The simplest stable discs are the four corner points; the more stable discs you hold, the greater your chance of winning.
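The simplest version of this factor, counting only the four corners as the text describes, can be sketched as follows. (A fuller stability analysis would also propagate stability along filled edges; that is beyond this sketch.)

```python
# Minimal stable-disc count: only the four corners, which can never be
# flipped once occupied. Board: 8x8 lists, 0 = empty, +1/-1 = the sides.

CORNERS = [(0, 0), (0, 7), (7, 0), (7, 7)]

def stable_discs(board, player):
    """Count corner discs held by `player`."""
    return sum(1 for x, y in CORNERS if board[x][y] == player)

board = [[0] * 8 for _ in range(8)]
board[0][0] = board[7][7] = 1
print(stable_discs(board, 1))  # 2
```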
With these evaluation factors and a suitable set of weighting coefficients, the evaluation function is fairly complete. At this point the game AI is basically built, and beating a beginner should be no problem for it.
But such an AI is still very fragile: it seems to choose the best move at every step, yet it easily walks into traps. This is the greedy strategy at work, which leads to the trap of local optima. How do we break out of it? Enter the king: the game tree.
(ii) Game tree:
The essence of the game tree is a minimax search process; for background, see the blog post "Minimax game tree".
The plain Minimax algorithm produces numerous redundant branches, so alpha-beta pruning is introduced as an optimization: it quickly cuts away unnecessary search branches and improves search efficiency.
I will not go into detail on this here; see the blog post "The A* algorithm / game trees: basic search algorithms in machine play".
The minimax process with alpha-beta pruning:
Negamax algorithm pseudocode:
// Negamax algorithm with alpha-beta pruning
int Negamax(GameState s, int depth, int alpha, int beta) {
    // game over, or the recursion depth limit has been reached
    if (GameOver(s) || depth == 0) {
        return Evaluation(s);
    }
    // iterate over every candidate move
    foreach (move in CandidateList(s)) {
        s2 = MakeMove(s, move);
        value = -Negamax(s2, depth - 1, -beta, -alpha);
        UnmakeMove(s2);
        if (value > alpha) {
            // alpha-beta pruning point
            if (value >= beta) {
                return beta;
            }
            alpha = value;
        }
    }
    return alpha;
}
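To see the scheme run end to end, here is the same negamax + alpha-beta structure in Python, applied to a trivial subtraction game of my own choosing (players alternately remove 1 or 2 counters; whoever takes the last counter wins). The toy game is a stand-in for Othello: what matters is the search structure, not the game.

```python
# Negamax with fail-hard alpha-beta pruning on a tiny subtraction game.
# Scores are from the side to move: +1 = win, -1 = loss, 0 = horizon cut.

def negamax(n, depth, alpha, beta):
    if n == 0:
        return -1  # previous player took the last counter: we lost
    if depth == 0:
        return 0   # search horizon reached: neutral evaluation
    for move in (1, 2):
        if move <= n:
            value = -negamax(n - move, depth - 1, -beta, -alpha)
            if value > alpha:
                if value >= beta:
                    return beta  # alpha-beta cutoff
                alpha = value
    return alpha

# n = 3 is a loss for the side to move (the opponent mirrors);
# n = 2 is a win (take both counters).
print(negamax(3, 10, -2, 2))  # -1
print(negamax(2, 10, -2, 2))  # 1
```

In a real Othello engine, `n` becomes the board state, the move loop runs over legal moves, and the depth-0 case calls the evaluation function g(s) built above.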
Prospect:
With the evaluation function and the game tree, the game AI takes a great leap forward. But beyond every mountain there is a higher one: can we go further?
For the evaluation function, our current strategy is to select evaluation factors and assign weights based on experience. Could machine learning methods automate factor (feature) selection and assign reasonable weight coefficients?
As for the game algorithm itself, is there still room for optimization? What is the trade-off between search depth and the breadth of search branches?
Most importantly, how do we set graded AI difficulty levels to enhance the user's experience?
Due to space constraints, these topics are left for the next blog post.
Summary:
Why choose Othello as the subject for describing game AI? On the one hand its rules are simple; on the other, its evaluation model is easy to build, and its search tree has few branches yet allows deep search. All of this greatly helps in quickly implementing and understanding the core algorithms of a game AI. This post mainly described the principles and optimization of the evaluation function and the game tree. The next post will cover how the AI learns from its games, along with advanced steps in performance optimization.
Written at the end:
If you found this article helpful, please consider leaving a small tip. In truth, I would like to see whether blogging can bring me a little revenue. However much it is, it is a sincere affirmation for the author.
Artificial Intelligence of Chess Games (I)