A Sudoku puzzle asks you to fill the digits 1~9 into a 9x9 grid so that no digit repeats in any row, any column, or any of the nine 3x3 sub-regions.
After a little thought, I wrote the first solution. This approach is what Wikipedia calls backtracking 1. The idea is simple: scan the empty cells one by one and fill in digits:
1. Fill the first empty cell with 1 and check whether it is valid (no duplicate digit in its row, its column, or its 3x3 sub-region). If it is valid, move on to the second cell; otherwise keep trying 2, 3, ... in this cell until a valid digit is found.
2. In the second cell, again start from 1 and try digits until a valid one is found, then move on to the next cell.
3. If no digit from 1 to 9 is valid in some cell, an earlier cell must have been filled incorrectly. Go back to the previous cell and continue from the digits it has not yet tried. For example, if it currently holds 3, try 4, 5, 6, ... until a valid digit is found, then advance to the next cell again; if none is found, that cell was also filled incorrectly, so fall back one more cell, and so on.
4. If the Sudoku has a solution, we will eventually reach a state in which every cell holds a valid digit. If it has no solution, we will eventually find that even the first cell has tried all the digits from 1 to 9 without success.
I implemented this in C++ 2. It solves the example above quickly, essentially in an instant.
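For reference, here is a minimal C++ sketch of this backtracking idea (not the exact code in the repository linked above; names such as isValid and solve are only illustrative):

int board[9][9];   // 0 marks an empty cell, 1-9 are filled digits

// Check whether digit d can legally be placed at (r, c):
// no duplicate in its row, its column, or its 3x3 sub-region.
bool isValid(int r, int c, int d) {
    for (int i = 0; i < 9; ++i)
        if (board[r][i] == d || board[i][c] == d) return false;
    int br = r / 3 * 3, bc = c / 3 * 3;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (board[br + i][bc + j] == d) return false;
    return true;
}

// Scan cells in row-major order; backtrack when no digit fits.
bool solve(int pos) {
    if (pos == 81) return true;                      // every cell is filled
    int r = pos / 9, c = pos % 9;
    if (board[r][c] != 0) return solve(pos + 1);     // given clue, skip it
    for (int d = 1; d <= 9; ++d) {
        if (isValid(r, c, d)) {
            board[r][c] = d;
            if (solve(pos + 1)) return true;
            board[r][c] = 0;                         // undo and try the next digit
        }
    }
    return false;                                    // forces the caller to back up
}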
That should have been the end of it. But this solution only produces a single solution. Is there no way to find all the solutions of a Sudoku?
A Sudoku fills the digits 1~9 into 9x9 cells, so there are 9x9x9x...x9 = 9^81 possible arrangements. If we flatten the 81 cells into an 81-digit number, enumerating from 111...1 to 999...9 would cover every case. But verifying all 9^81 of them is clearly impossible: a current Intel i7 processor executes about 2.38 x 10^11 instructions per second 3, so even if our program needed only one instruction per case, it could not finish within our lifetime.
Is there a way to cut down the number of candidates to verify? The first idea: we only need to arrange digits in the empty cells, so for the example in this article there are 9^51 cases. That is far fewer, but still an unrealistic number for our CPU. The second idea: for these 51 empty cells there is no need to try all of 1~9 in every cell. Looking at the digits already present in the same row, the same column, and the same 3x3 sub-region as a given empty cell, its possible values may be far fewer than 9; sometimes the cell's value is even uniquely determined. I implemented this idea quickly, only to find, regrettably, that for the example in this article it merely reduces the number of candidates to about 3 x 10^23. Another big leap compared with 9^51, but still not enough for a usable program.
Is this enumeration approach a dead end, then?
After some more thought I found the answer: first find all the possible solutions of each 3x3 sub-region. The solution of the whole Sudoku table is assembled from the solutions of its nine 3x3 sub-regions. Finding the solutions of a single 3x3 sub-region is fast, especially once we have limited the possible values of each cell using the whole table as above. After the solutions of the nine sub-regions are known, combining them should also be fast. With this idea I wrote the second version: the number of candidates to verify dropped to about 2 x 10^8, and the program spat out the answer after roughly 10 seconds.
Can it be made faster still? Of course! The trick is to add a layer in the middle: the nine 3x3 sub-regions form three bands of three. Find all the legal solutions of each band, then combine the three bands' solutions into the solution of the whole table. This was soon implemented: for the example in this article it is, like backtracking, solved in an instant, and the total number of candidates to verify drops to about 20,000.
This solution can be called the permutation-combination method. The idea is summarized as follows:
1. For each empty cell, consider the row, column, and sub-region it belongs to and find the list S1 of all its possible values (a small sketch of this step follows after the list).
2. For each 3x3 sub-region, combine the S1 lists of all the empty cells it contains and find the list S2 of all the solutions of that sub-region.
3. The nine 3x3 sub-regions form three bands of three. For each band, combine the S2 lists of its three sub-regions to find the list S3 of all the solutions of that band.
4. Combine the S3 lists of the three bands to find the solutions of the whole Sudoku table.
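To make step 1 concrete, here is a small C++ sketch of computing the candidate list S1 of every empty cell with bitmasks (the bitmask layout and the name candidateMasks are one possible choice, not part of the original program):

#include <vector>

// For each empty cell, bit d of its mask is set when digit d is still possible,
// i.e. d does not yet appear in the cell's row, column, or 3x3 sub-region.
std::vector<int> candidateMasks(const int board[9][9]) {
    std::vector<int> masks(81, 0);
    for (int r = 0; r < 9; ++r) {
        for (int c = 0; c < 9; ++c) {
            if (board[r][c] != 0) continue;              // only empty cells need S1
            int used = 0;
            for (int i = 0; i < 9; ++i) {
                used |= 1 << board[r][i];                // digits in the same row
                used |= 1 << board[i][c];                // digits in the same column
            }
            int br = r / 3 * 3, bc = c / 3 * 3;
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    used |= 1 << board[br + i][bc + j];  // digits in the same sub-region
            masks[r * 9 + c] = ~used & 0x3FE;            // keep bits 1..9 that are unused
        }
    }
    return masks;
}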
That concludes the study of the permutation-combination method. There are further optimizations one could apply, such as processing in parallel, but the line of thinking is essentially as above.
Are there other ways of thinking about solving Sudoku? Searching Wikipedia, I found that backtracking is mentioned there, while the permutation-combination method is not described. In addition, three approaches that abstract Sudoku into different mathematical problems are worth mentioning.
The first abstracts Sudoku as an exact cover problem, which is then solved with an exact cover algorithm such as dancing links 4.
Specifically, we turn the Sudoku problem into an "exact hitting set" problem. What is an exact hitting set? Suppose we have the following sets:
A = {1, 4, 7};
B = {1, 4};
C = {4, 5, 7};
D = {3, 5, 6};
E = {2, 3, 6, 7};
F = {2, 7};
Each of these sets is a subset of X = {1, 2, 3, 4, 5, 6, 7}. The set X* = {1, 2, 5} is called an exact hitting set of {A, B, C, D, E, F}, because each of A, B, C, D, E, and F contains exactly one element of X*.
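A quick way to convince yourself is to count the hits in code; this short C++ check mirrors the sets above and prints 1 once for each of A through F:

#include <cstdio>
#include <set>
#include <vector>

int main() {
    std::vector<std::set<int>> sets = {
        {1, 4, 7}, {1, 4}, {4, 5, 7}, {3, 5, 6}, {2, 3, 6, 7}, {2, 7}};  // A..F
    std::set<int> xStar = {1, 2, 5};                                     // candidate X*
    for (const auto& s : sets) {
        int hits = 0;
        for (int e : xStar) hits += s.count(e);   // how many elements of X* lie in s
        std::printf("%d\n", hits);                // prints 1 for every set: an exact hit
    }
    return 0;
}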
How do we abstract Sudoku as an exact hitting set problem? First, a Sudoku asks us to fill digits 1~9 into the 9x9 cells, so over all cells there are 9x9x9 = 729 possible "this cell holds this digit" assignments. Write them as:
X = {r1c1#1, r1c1#2, ..., r9c9#9}
The rules of Sudoku can then be described by the following four kinds of constraints:
1. Each cell holds exactly one digit from 1~9. For example, the set of all possible assignments of cell r1c1 is:
R1C1 = {r1c1#1, r1c1#2, r1c1#3, r1c1#4, r1c1#5, r1c1#6, r1c1#7, r1c1#8, r1c1#9}
2. In each row, every digit must appear exactly once. For example, in the first row the digit 1 can appear in these positions: R1#1 = {r1c1#1, r1c2#1, r1c3#1, r1c4#1, r1c5#1, r1c6#1, r1c7#1, r1c8#1, r1c9#1}
3. In each column, every digit must appear exactly once. Likewise, in the first column the digit 1 can appear in these positions: C1#1 = {r1c1#1, r2c1#1, r3c1#1, r4c1#1, r5c1#1, r6c1#1, r7c1#1, r8c1#1, r9c1#1}
4. In each 3x3 sub-region, every digit must appear exactly once. Likewise, in the first 3x3 sub-region the digit 1 can appear in these positions:
B1#1 = {r1c1#1, r1c2#1, r1c3#1, r2c1#1, r2c2#1, r2c3#1, r3c1#1, r3c2#1, r3c3#1}
Each set of positions where a digit may appear under one of these rules is such a set. With 81 cells, 9 rows x 9 digits, 9 columns x 9 digits, and 9 sub-regions x 9 digits, we get 81 + 81 + 81 + 81 = 324 sets, all of them subsets of X. Solving the Sudoku is then equivalent to finding a subset X* of X such that each of these 324 sets contains exactly one element of X*: exactly an exact hitting set!
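The 324 sets can also be seen as the columns of a 729 x 324 zero-one matrix, with one row per candidate "digit n in cell (r, c)". A C++ sketch of building such a matrix follows; the column layout (cells first, then row-digit, column-digit, and sub-region-digit constraints) is just one reasonable indexing scheme, not prescribed by the problem:

#include <bitset>
#include <vector>

// Each of the 729 candidates covers exactly four of the 324 constraint columns.
std::vector<std::bitset<324>> buildExactCoverRows() {
    std::vector<std::bitset<324>> rows(729);
    for (int r = 0; r < 9; ++r)
        for (int c = 0; c < 9; ++c)
            for (int n = 0; n < 9; ++n) {                 // n encodes the digit n+1
                int b = (r / 3) * 3 + c / 3;              // index of the 3x3 sub-region
                std::bitset<324>& row = rows[(r * 9 + c) * 9 + n];
                row.set(0   + r * 9 + c);                 // cell (r, c) is filled
                row.set(81  + r * 9 + n);                 // row r contains digit n+1
                row.set(162 + c * 9 + n);                 // column c contains digit n+1
                row.set(243 + b * 9 + n);                 // sub-region b contains digit n+1
            }
    return rows;
}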
The second approach expresses Sudoku as an optimization problem and solves it with an optimization algorithm. The core is to define an evaluation function: the digits filled into the cells are the independent variables, and how far the current table is from a valid solution is the function value. The Sudoku problem then becomes an optimization problem: for which assignment of digits is the evaluation function smallest? Below, taking the classic simulated annealing algorithm as an example, I briefly describe how this kind of method works 7.
First we give the digits in the Sudoku table, that is, the independent variables of the optimization problem, an initial value. The method: randomly fill the empty cells of each 3x3 sub-region with digits from 1~9 so that no digit repeats within that sub-region. Next we define an operation on the Sudoku table that lets the search move from the current value of the variables to a new one. The operation: randomly pick a cell in the 9x9 grid that was originally empty and now holds a digit, and swap its value with that of another randomly chosen originally-empty cell in the same 3x3 sub-region. Under this initial value and this operation, the table we obtain always satisfies the condition "no repetition within any 3x3 sub-region". The evaluation function can therefore be simplified to: the total number of digits from 1~9 missing from all rows and columns. For example, in the figure on the left, the first column is missing the digit 9, so its missing count is 1; the second column is missing the digits 6 and 8, so its missing count is 2; ...; summed over all rows and columns of the whole table, the missing count is 34. For the valid solution shown in the figure on the right, the total missing count over all rows and columns is 0.
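As a sketch, this evaluation function (the total missing-digit count over all rows and columns) might look like the following in C++; it assumes every cell already holds some digit, which the initialization above guarantees:

// f(x): how many digits from 1-9 are missing, summed over all rows and columns.
// A valid solution scores 0; larger values are further from a solution.
int evaluate(const int board[9][9]) {
    int missing = 0;
    for (int k = 0; k < 9; ++k) {
        bool inRow[10] = {false}, inCol[10] = {false};
        for (int i = 0; i < 9; ++i) {
            inRow[board[k][i]] = true;   // digits present in row k
            inCol[board[i][k]] = true;   // digits present in column k
        }
        for (int d = 1; d <= 9; ++d) {
            if (!inRow[d]) ++missing;
            if (!inCol[d]) ++missing;
        }
    }
    return missing;
}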
Having defined the initial value, the operation, and the evaluation function, we can solve the problem with the standard simulated annealing algorithm. Write the initial Sudoku table as x0, the operation as OP(x), and the evaluation function as f(x). The algorithm proceeds as follows 6 (a small sketch of the loop follows after these steps):
1. Choose an initial "temperature" T and its minimum allowed value t_min.
2. Using the defined operation OP, obtain a new value x1 = OP(x0).
3. Compute the evaluation function values f(x0) and f(x1) and their difference df = f(x1) - f(x0).
4. If df < 0, the new value is closer to a valid solution: update the state of the Sudoku table to x1, that is, set x0 = x1.
5. If df > 0, we still update the state to x1 with a certain probability; this is what lets the search escape local minima of the evaluation function. The condition is usually defined as exp(-df/T) > random(0, 1): when exp(-df/T) is greater than a random value between 0 and 1, set x0 = x1. Since -df/T is negative here, exp(-df/T) lies between 0 and 1; the higher the temperature T, the larger exp(-df/T) and the greater the probability of accepting x1, and that probability shrinks as the temperature drops.
6. Lower the temperature by some strategy, for example T = r * T with r a constant between 0 and 1. If T < t_min, stop; otherwise return to step 2.
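Putting steps 1 through 6 together, a C++ sketch of the loop could look like this. It reuses evaluate() from the sketch above; swapInBox() stands for the operation OP, and board/given are assumed globals (the clue cells marked in given are never swapped):

#include <cmath>
#include <cstdlib>
#include <cstring>
#include <utility>

int  board[9][9];                     // current table, each 3x3 sub-region pre-filled with 1-9
bool given[9][9];                     // true for the original clues, which never move

int evaluate(const int b[9][9]);      // missing-digit count, defined in the earlier sketch

// OP(x): pick one 3x3 sub-region at random and swap two of its non-clue cells.
void swapInBox(int b[9][9]) {
    int box = std::rand() % 9, br = box / 3 * 3, bc = box % 3 * 3;
    int r1, c1, r2, c2;
    do { r1 = br + std::rand() % 3; c1 = bc + std::rand() % 3; } while (given[r1][c1]);
    do { r2 = br + std::rand() % 3; c2 = bc + std::rand() % 3; } while (given[r2][c2]);
    std::swap(b[r1][c1], b[r2][c2]);
}

void anneal() {
    double T = 1.0, tMin = 1e-4, r = 0.99;            // step 1: temperature schedule
    while (T > tMin && evaluate(board) > 0) {
        int x1[9][9];
        std::memcpy(x1, board, sizeof(x1));
        swapInBox(x1);                                // step 2: x1 = OP(x0)
        int df = evaluate(x1) - evaluate(board);      // step 3: df = f(x1) - f(x0)
        if (df < 0 || std::exp(-df / T) > (double)std::rand() / RAND_MAX) {
            std::memcpy(board, x1, sizeof(x1));       // steps 4-5: accept the new state
        }
        T = r * T;                                    // step 6: cool the temperature
    }
}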
This randomized search takes us from a randomly chosen initial value all the way to a valid solution of the Sudoku table! It feels a little magical, doesn't it? Published results show the algorithm succeeding 100% of the time on 9x9 Sudoku and with a fairly high success rate on 16x16 and 25x25 Sudoku as well 7.
The third approach treats Sudoku as a constraint satisfaction problem and solves it with constraint programming. This is a rather natural way to see it: each cell of the Sudoku table is a variable whose range of values is the integers 1~9, and "the variables in each row, each column, and each 3x3 sub-region are pairwise different" is the constraint these variables must satisfy. Solving the Sudoku then becomes a constraint solving problem: for which values of these variables are all the constraints satisfied at once?
With a constraint programming language such as Prolog, we can write a Sudoku solver with very little effort. Unlike an ordinary programming language, a constraint programming language gives us the solution automatically once we describe the constraint problem, without specifying the concrete solving steps. A Prolog program for solving Sudoku is given below; for simplicity, the Sudoku is 4x4 8.
The Sudoku to be solved is shown on the lower left. First, each cell of the table is labeled with a variable, as shown in the image on the right.
The solver program is as follows:
sudoku(Puzzle, Solution) :-
    Solution = Puzzle,
    Puzzle = [A, B, C, D,
              E, F, G, H,
              I, J, K, L,
              M, N, O, P],
    fd_domain(Solution, 1, 4),
    Row1 = [A, B, C, D],
    Row2 = [E, F, G, H],
    Row3 = [I, J, K, L],
    Row4 = [M, N, O, P],
    Col1 = [A, E, I, M],
    Col2 = [B, F, J, N],
    Col3 = [C, G, K, O],
    Col4 = [D, H, L, P],
    Square1 = [A, B, E, F],
    Square2 = [C, D, G, H],
    Square3 = [I, J, M, N],
    Square4 = [K, L, O, P],
    valid([Row1, Row2, Row3, Row4,
           Col1, Col2, Col3, Col4,
           Square1, Square2, Square3, Square4]).

valid([]).
valid([Head|Tail]) :-
    fd_all_different(Head),
    valid(Tail).
This program first defines a 4x4 Sudoku table as a collection of 16 variables, then defines the subset of variables in each row, each column, and each 2x2 sub-region, and finally states the constraint each subset must satisfy: the variables it contains are all different (fd_all_different).
At run time, simply enter the Sudoku table to be solved in the following format:
sudoku([_, _, 4, _, _, 2, _, _, _, _, 1, _, _, 3, _, _], S).
You can get the result:
S = [3, 1, 4, 2, 4, 2, 3, 1, 2, 4, 1, 3, 1, 3, 2, 4]
It looks very simple. But how does constraint programming actually find the solution of the Sudoku?
One way is to express the constraint system as a graph 9.
Each variable is treated as a vertex of the graph, and the range of values it can take is written next to its vertex.
Each constraint between variables is treated as an edge of the graph. Here we have only one kind of constraint: two variables must be unequal (all_different). Whenever an "unequal" constraint holds between two variables, we add an edge between their vertices. For example, after adding the constraint that the variables in each row are pairwise different, our graph becomes this:
With "Each column variable is not equal", our diagram will look like this:
With the constraints for every row, every column, and every 2x2 sub-region all added, the final graph is this:
The next step is to update the range of possible values of each variable. Whenever a variable's range changes, we mark an asterisk (*) next to its vertex. The ranges of the variables standing for non-empty cells are updated first: there are four non-empty cells, c, f, k, and n, so we set their ranges to the digit already in the cell and mark their vertices with asterisks.
Next, update the value ranges of the vertices adjacent to the asterisked vertices. Take vertex c as an example: since it can only take the value 4, none of its adjacent vertices may take the value 4 any more. The ranges of a, b, d, g, h, and o thus shrink to {1, 2, 3}; because their ranges have changed, their vertices are marked with asterisks too, and so on.
In this way we keep updating the value ranges of the neighbors of asterisked vertices, and whenever a vertex's range changes it is marked with an asterisk as well. Note that each update considers only the current pair of vertices and no others, so the whole process is just the simple repetition of one local step.
For this table, which has a unique solution, the updating process converges to the result below, which is exactly the solution of the Sudoku!
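In code, this propagation step is essentially the following C++ sketch (written here for a standard 9x9 table; the 4x4 example above works the same way). Each cell keeps a bitmask of its still-possible digits, and the queue plays the role of the asterisk marks:

#include <queue>

int domain[81];   // bits 1..9 of domain[i] are set for the digits cell i can still take

// Two cells are "adjacent" when they share a row, a column, or a 3x3 sub-region.
bool adjacent(int a, int b) {
    int ra = a / 9, ca = a % 9, rb = b / 9, cb = b % 9;
    return ra == rb || ca == cb || (ra / 3 == rb / 3 && ca / 3 == cb / 3);
}

// Start with the clue cells in the queue; repeat until no more ranges change.
void propagate(std::queue<int> starred) {
    while (!starred.empty()) {
        int cell = starred.front(); starred.pop();
        int mask = domain[cell];
        if (mask == 0 || (mask & (mask - 1)) != 0) continue;  // only single-digit ranges constrain neighbors here
        for (int other = 0; other < 81; ++other) {
            if (other == cell || !adjacent(cell, other)) continue;
            if (domain[other] & mask) {
                domain[other] &= ~mask;    // the decided digit is no longer possible
                starred.push(other);       // mark the neighbor with an "asterisk"
            }
        }
    }
}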
That concludes this look at how constraint programming solves Sudoku. Of course it is only a very superficial introduction; real implementations involve many special cases and optimizations that we will not go into here.
And this is the end of our little article about Sudoku. We introduced five methods for solving it: backtracking, permutation-combination, exact cover, simulated annealing, and constraint programming. That such a small puzzle can be viewed and analyzed from so many angles makes one marvel at the wonders of thinking and of numbers!
Resources
1. https://en.wikipedia.org/wiki/Sudoku_solving_algorithms
2. https://github.com/kaige/Sudoku/
3. https://en.wikipedia.org/wiki/Instructions_per_second#Millions_of_instructions_per_second
4. https://en.wikipedia.org/wiki/Exact_cover#Sudoku
5. Rhyd Lewis. Metaheuristics can solve Sudoku puzzles.
6. http://www.cnblogs.com/heaad/archive/2010/12/20/1911614.html
7. Perez, Meir and Marwala, Tshilidzi. Stochastic optimization approaches for solving Sudoku.
8. http://www.ybrikman.com/writing/2012/02/16/seven-languages-in-seven-weeks-prolog_16/
9. https://www.cl.cam.ac.uk/teaching/0809/Prolog/Prolog08ML6R2.pdf