I. What is an algorithm?
An algorithm is a finite sequence of clear, unambiguous instructions for solving a problem: for any valid input it produces the required output within a finite amount of time. Algorithms often contain repeated steps and comparisons or logical decisions. If an algorithm is flawed or unsuited to a problem, executing it will not solve the problem. Different algorithms may accomplish the same task with different amounts of time or space, that is, with different efficiency. The quality of an algorithm can be measured by its space complexity and time complexity.
The time complexity of an algorithm refers to the time resources the algorithm consumes. It is generally expressed as a function f(n) of the problem size n; the growth rate of the algorithm's execution time is proportional to the growth rate of f(n), which is called the asymptotic time complexity. Time complexity is written in big-O notation, read as "order of". Common time complexities are O(1) constant order, O(log n) logarithmic order, O(n) linear order, and O(n²) quadratic order.
The space complexity of an algorithm refers to the space resources it consumes. It is computed and expressed much like time complexity, generally as an asymptotic order. Compared with time complexity, analyzing space complexity is usually much simpler.
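As an added illustration (not from the original text), the following C fragment contrasts a linear-order loop with a quadratic-order pair of nested loops; the sample array v is an assumption of the example:
#include <stdio.h>

int main(void)
{   int v[] = {3, 1, 4, 1, 5, 9, 2, 6};
    int n = sizeof(v) / sizeof(v[0]);
    int i, j, sum = 0, inversions = 0;

    /* O(n): one pass over the array */
    for (i = 0; i < n; i++)
        sum += v[i];

    /* O(n²): every pair of elements is examined once */
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (v[i] > v[j]) inversions++;

    printf("sum = %d, inversions = %d\n", sum, inversions);
    return 0;
}
Doubling n roughly doubles the work of the first loop but quadruples the work of the second, which is what the orders O(n) and O(n²) express.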
II. Algorithm Design Methods
1. Recurrence Method
The recurrence method solves a problem by exploiting a recurrence relation inherent in the problem itself. For problems suited to this method, the solution for size 1 (or size 0) is known or easily obtained, and the problem has the following recursive property: the solution for size i can be constructed from the already obtained solutions for sizes 1, 2, ..., i-1. A program can therefore start from i = 0 or i = 1 and repeatedly apply the recurrence, obtaining the solution for size i from the solutions for smaller sizes, until the solution for size n is reached.
[Problem] Factorial computation
Problem description: write a program that, for a given n (n ≤ 100), computes and outputs k! for k = 1, 2, ..., n.
Because the required integers may have far more digits than an ordinary integer type can hold, the program stores a long integer in a one-dimensional array, each element of which holds a single decimal digit. An m-digit integer N is stored in the array a as:
N = a[m]×10^(m-1) + a[m-1]×10^(m-2) + ... + a[2]×10^1 + a[1]×10^0
and a[0] stores the number of digits m of N, that is, a[0] = m. With this convention, the array holds the digits of k! from the lowest digit upward, starting at the second element. For example, 5! = 120 is stored as:
a[0] = 3, a[1] = 0, a[2] = 2, a[3] = 1
The first element, 3, indicates that the long integer has three digits; the digits 0, 2, 1 that follow, read from the low position to the high position, represent the integer 120.
The factorial k! can be obtained from the previously computed (k-1)! by adding (k-1)! to itself k-1 times. For example, knowing that 4! = 24, we compute 5! by adding 24 to the original 24 four times, giving 120. See the program below for details.
#include <stdio.h>
#include <stdlib.h>
#define MAXN 1000

/* Given a[] holding (k-1)! in the digit-array format described above,
   turn it into k! by adding (k-1)! to itself k-1 times. */
void pnext(int a[], int k)
{   int *b, m = a[0], i, j, r, carry;
    b = (int *)malloc(sizeof(int) * (m + 1));
    for (i = 1; i <= m; i++) b[i] = a[i];          /* b keeps a copy of (k-1)! */
    for (j = 1; j < k; j++)                        /* add the copy k-1 times */
    {   for (carry = 0, i = 1; i <= m; i++)
        {   r = (i <= a[0] ? a[i] + b[i] : a[i]) + carry;
            a[i] = r % 10;
            carry = r / 10;
        }
        if (carry) a[++m] = carry;                 /* the result gained a digit */
    }
    free(b);
    a[0] = m;
}

void write(int *a, int k)
{   int i;
    printf("%4d! = ", k);
    for (i = a[0]; i > 0; i--)
        printf("%d", a[i]);
    printf("\n");
}

int main(void)
{   int a[MAXN], n, k;
    printf("Enter the number n: ");
    scanf("%d", &n);
    a[0] = 1;                                      /* 1! = 1 */
    a[1] = 1;
    write(a, 1);
    for (k = 2; k <= n; k++)
    {   pnext(a, k);
        write(a, k);
        getchar();                                 /* pause after each line of output */
    }
    return 0;
}
2. Recursion
Recursion is a powerful tool for designing and describing algorithms. Because it is often used in the description of complex algorithms, we will discuss it before introducing other algorithm design methods.
Problems whose algorithms can be described recursively usually have the following feature: to solve a problem of size n, we try to decompose it into smaller problems of the same kind, from whose solutions the solution of the larger problem can easily be constructed; these smaller problems can in turn be decomposed in the same way, until problems are reached that can be solved directly. In particular, when the size n is 1, the problem can be solved immediately.
[Problem] Write a function fib(n) that computes the n-th item of the Fibonacci sequence.
The Fibonacci sequence is 0, 1, 1, 2, 3, ..., that is:
fib(0) = 0;
fib(1) = 1;
fib(n) = fib(n-1) + fib(n-2) (for n > 1).
The recursive function:
int fib(int n)
{   if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n - 1) + fib(n - 2);    /* n > 1 */
}
The execution of a recursive algorithm has two phases: descent and return. In the descent phase, the solution of the more complex problem (size n) is reduced to the solutions of problems simpler than the original (size less than n). In the example above, solving fib(n) is reduced to solving fib(n-1) and fib(n-2): to compute fib(n), fib(n-1) and fib(n-2) must be computed first; to compute those, fib(n-3) and fib(n-4) must be computed first, and so on, until fib(1) and fib(0) are reached, whose results 1 and 0 are available immediately. The descent phase must terminate; in the function fib it terminates when n equals 1 or 0.
In the return phase, once the solutions of the simplest cases have been obtained, the results are passed back level by level to obtain the solutions of slightly more complex problems: after fib(1) and fib(0) are obtained, the result of fib(2) is returned, ..., and after fib(n-1) and fib(n-2) are obtained, the result of fib(n) is returned.
When writing recursive functions, note that the local variables and parameters of a function are local to the current call level. When the recursion descends into the layer of a simpler problem, the parameters and local variables of the original level are hidden; each layer of the series of "simpler problems" has its own parameters and local variables.
Because recursion causes a long chain of function calls and may repeat many identical computations, the execution efficiency of a recursive algorithm is relatively low. When a recursive algorithm can easily be converted into a recurrent (iterative) one, the program is usually written from the recurrent form. For the example above, computing the n-th Fibonacci number should use a recurrence: each item is computed from the previous two, starting from the first two items, until the required n-th item is obtained.
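For illustration (added here, not part of the original program), a recurrent version of fib that builds each term from the previous two:
/* Iterative computation of the n-th Fibonacci number,
   computing each term from the two before it. */
int fib_iter(int n)
{   int prev = 0, cur = 1, next, i;
    if (n == 0) return 0;
    for (i = 2; i <= n; i++)
    {   next = prev + cur;
        prev = cur;
        cur = next;
    }
    return cur;
}
This version runs in O(n) time, whereas the doubly recursive version repeats the same subcomputations an exponential number of times.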
[Problem] Combination problem
Problem description: find all combinations of r numbers taken from the natural numbers 1, 2, ..., n. For example, for n = 5 and r = 3, all the combinations are:
(1) 5, 4, 3    (2) 5, 4, 2    (3) 5, 4, 1
(4) 5, 3, 2    (5) 5, 3, 1    (6) 5, 2, 1
(7) 4, 3, 2    (8) 4, 3, 1    (9) 4, 2, 1
(10) 3, 2, 1
Looking at the ten combinations listed above suggests a recursive algorithm. Let the function comb(m, k) find all combinations of k numbers taken from 1, 2, ..., m. Once the first (largest) number of a combination has been chosen, the remaining numbers form a combination of k-1 numbers taken from the smaller numbers that are left. In this way, the problem of choosing k numbers out of m is reduced to the problem of choosing k-1 numbers out of fewer numbers. The function uses a working array a[] to hold the numbers of the combination being built; by convention, the number chosen for the k-th position is placed in a[k], and a[0] holds r, so that a complete combination occupies a[r], a[r-1], ..., a[1] and is output once it is finished. The first number can be m, m-1, ..., or k. After the function places a chosen number into the array there are two cases: if elements of the combination are still missing, it continues recursively to determine them; if all elements have been determined, it outputs the combination. See the function comb in the program below.
[Program]
#include <stdio.h>
#define MAXN 100
int a[MAXN];

void comb(int m, int k)
{   int i, j;
    for (i = m; i >= k; i--)
    {   a[k] = i;                        /* choose i as the k-th (largest remaining) number */
        if (k > 1)
            comb(i - 1, k - 1);          /* choose the remaining k-1 numbers from 1..i-1 */
        else
        {   for (j = a[0]; j > 0; j--)   /* a[0] holds r, the length of a combination */
                printf("%4d", a[j]);
            printf("\n");
        }
    }
}

int main(void)
{   a[0] = 3;                            /* r = 3 */
    comb(5, 3);
    return 0;
}
3. Backtracking
The backtracking method is also called the trial-and-error method. It temporarily sets aside the requirement on the size of the solution and enumerates and tests candidate partial solutions in some order. When the current candidate is found to be impossible to extend into a solution, the next candidate is tried. If the current candidate does not yet satisfy the size requirement but satisfies all other requirements, its size is extended and testing continues. If the current candidate satisfies all requirements, including the size requirement, it is a solution of the problem. In the backtracking method, abandoning the current candidate and returning to an earlier choice is called backtracking; extending the size of the current candidate and testing it is called advancing.
[Problem] Combination problem
Problem description: find all combinations of r numbers taken from the natural numbers 1, 2, ..., n.
Using the backtracking method, the combination being built is stored, in increasing order, in a[0], a[1], ..., a[r-1], and its elements satisfy the following properties:
(1) a[i+1] > a[i], i.e., each element is larger than the one before it;
(2) a[i] - i <= n - r + 1, i.e., position i still leaves room for the remaining larger elements.
The search for solutions can be described as follows:
First, set aside the requirement that a combination contain r numbers. The candidate combination starts with the single number 1. Since this candidate satisfies every condition except the size requirement, its size is extended, and to keep property (1) it becomes 1, 2. Continuing in this way gives the candidate 1, 2, 3, which satisfies all conditions including the size requirement and is therefore a solution. On the basis of this solution, the next candidate is selected: the 3 in a[2] is adjusted to 4 and then to 5, both of which satisfy all requirements, giving the solutions 1, 2, 4 and 1, 2, 5. Since a[2] = 5 cannot be adjusted any further, we backtrack from a[2] to a[1]; a[1] = 2 can be adjusted to 3, and advancing again yields the solution 1, 3, 4. Repeating this advance-and-backtrack process until backtracking reaches a[0] and a[0] can no longer be adjusted means that all solutions of the problem have been found. The program is as follows:
[Program]
#include <stdio.h>
#define MAXN 100
int a[MAXN];

void comb(int m, int r)
{   int i, j;
    i = 0;
    a[i] = 1;
    do {
        if (a[i] - i <= m - r + 1)        /* property (2): the candidate can still lead to a solution */
        {   if (i == r - 1)               /* a complete combination of r numbers */
            {   for (j = 0; j < r; j++)
                    printf("%4d", a[j]);
                printf("\n");
                a[i]++;                   /* try the next value in the last position */
                continue;
            }
            i++;                          /* advance: extend the candidate by one position */
            a[i] = a[i - 1] + 1;          /* property (1): keep the elements increasing */
        }
        else
        {   if (i == 0)
                return;                   /* cannot backtrack any further: all solutions found */
            a[--i]++;                     /* backtrack and adjust the previous position */
        }
    } while (1);
}

int main(void)
{   comb(5, 3);
    return 0;
}
4. Greedy method
The greedy method does not pursue the optimal solution; it only aims to obtain a reasonably satisfactory one quickly, saving the large amount of time that exhausting all possibilities to find the optimum would require. The greedy method makes the locally best choice based on the current situation, without considering all possibilities for the overall situation, and it never backtracks.
For example, when making change, in order to hand back as few coins as possible, we do not consider every possible way of making up the amount; instead we start from the largest denomination and consider each denomination in decreasing order, using as many coins of a large denomination as possible before moving on to the next smaller one. This is the greedy method at work. For everyday currency it happens to give the optimal answer, because banks choose the kinds and denominations of the coins they issue carefully. But suppose the only denominations are 1, 5, and 11 units and change of 15 units must be given. The greedy algorithm hands back one coin of 11 and four coins of 1, five coins in total, whereas the optimal solution is three coins of 5.
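A minimal sketch of this change-making strategy, added here for illustration (the function name make_change and the hard-coded denomination table are assumptions of the example):
#include <stdio.h>

/* Greedy change-making: always take as many coins of the largest
   remaining denomination as possible.  Denominations must be
   listed in decreasing order. */
void make_change(int amount, const int denom[], int k)
{   int i;
    for (i = 0; i < k; i++)
    {   int count = amount / denom[i];
        amount -= count * denom[i];
        if (count > 0)
            printf("%d coin(s) of value %d\n", count, denom[i]);
    }
}

int main(void)
{   int denom[] = {11, 5, 1};          /* the denominations from the text */
    make_change(15, denom, 3);         /* greedy answer: one 11 and four 1s */
    return 0;
}
Run on the amount 15 it prints one coin of 11 and four coins of 1, five coins in total, confirming that the greedy choice is not always globally optimal.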
[Problem] Packing Problem
Problem description: there are n kinds of items, numbered 0, 1, ..., n-1, with volumes v0, v1, ..., vn-1. They are to be packed into boxes, each of capacity v; it is assumed that no item exceeds the capacity of a box, that is, 0 < vi ≤ v for 0 ≤ i < n. Different packing schemes may require different numbers of boxes; the packing problem asks for a scheme that uses as few boxes as possible.
If we examined every way of partitioning the set of n items into at most n subsets, the optimal solution could be found, but the total number of possible partitions is far too large: for even moderately large n, enumerating them all is unacceptable. We therefore use a very simple approximation algorithm for the packing problem, namely the greedy method: each item is placed, in turn, into the first box in which it fits. Although this algorithm cannot guarantee the optimal solution, it usually finds a quite good one. Without loss of generality, assume the item volumes are given in decreasing order, that is, v0 ≥ v1 ≥ ... ≥ vn-1; if they are not, simply sort the n items by volume from largest to smallest and renumber them accordingly. The packing algorithm is described as follows:
{   input the box volume;
    input the number of items n;
    input the volume of each item, from largest to smallest;
    the list of boxes in use is initially empty;
    the box counter box_count is 0;
    for (i = 0; i < n; i++)
    {   search, starting from the first box in use, for a box j into which item i fits;
        if (item i does not fit into any box in use)
        {   start a new box and put item i into it;
            box_count++;
        }
        else
            put item i into box j;
    }
}
The above algorithm yields the number of boxes box_count and the items placed in each box. The following example shows that it may fail to find the optimal solution. Suppose there are six items with volumes 60, 45, 35, 20, 20, and 20 units, and the box capacity is 100 units. With the above algorithm, three boxes are needed: the first box holds items 1 and 3, the second holds items 2, 4, and 5, and the third holds item 6. The optimal solution, however, uses only two boxes, holding items 1, 4, 5 and items 2, 3, 6 respectively.
In the program, the items placed in each box are kept in a linked list; for each box a structure records the box's remaining capacity and the head pointer of its item list, and the boxes themselves also form a linked list. The following program is written according to the above algorithm.
[Program]
#include <stdio.h>
#include <stdlib.h>

typedef struct ele            /* one item stored in a box */
{   int vno;                  /* item number */
    struct ele *link;
} ELE;

typedef struct hnode          /* one box */
{   int remainder;            /* remaining capacity of the box */
    ELE *head;                /* list of items placed in the box */
    struct hnode *next;
} HNODE;

int main(void)
{   int n, i, box_count, box_volume, *a;
    HNODE *box_h, *box_t, *j;
    ELE *p, *q;
    printf("Input the box volume:\n");
    scanf("%d", &box_volume);
    printf("Input the number of items:\n");
    scanf("%d", &n);
    a = (int *)malloc(sizeof(int) * n);
    printf("Input the volume of each item, from largest to smallest:\n");
    for (i = 0; i < n; i++) scanf("%d", a + i);
    box_h = box_t = NULL;                          /* list of boxes in use, initially empty */
    box_count = 0;
    for (i = 0; i < n; i++)
    {   p = (ELE *)malloc(sizeof(ELE));
        p->vno = i;
        for (j = box_h; j != NULL; j = j->next)    /* first box that can hold item i */
            if (j->remainder >= a[i]) break;
        if (j == NULL)                             /* no box fits: open a new one */
        {   j = (HNODE *)malloc(sizeof(HNODE));
            j->remainder = box_volume - a[i];
            j->head = NULL;
            if (box_h == NULL) box_h = box_t = j;
            else box_t = box_t->next = j;
            j->next = NULL;
            box_count++;
        }
        else j->remainder -= a[i];
        /* append item i to the end of box j's item list */
        for (q = j->head; q != NULL && q->link != NULL; q = q->link);
        if (q == NULL)
        {   p->link = j->head;
            j->head = p;
        }
        else
        {   p->link = NULL;
            q->link = p;
        }
    }
    printf("%d boxes are used in total\n", box_count);
    printf("The contents of each box are as follows:\n");
    for (j = box_h, i = 1; j != NULL; j = j->next, i++)
    {   printf("Box %2d, remaining capacity %4d, contains items:\n", i, j->remainder);
        for (p = j->head; p != NULL; p = p->link)
            printf("%4d", p->vno + 1);
        printf("\n");
    }
    return 0;
}
5. Divide and conquer method
The computing time required for any problem solvable by computer is related to its size n: the smaller the problem, the easier it is to solve directly and the less computing time it takes. For example, when sorting n elements, if n = 1 no comparison is needed; if n = 2 one comparison puts the elements in order; if n = 3 at most three comparisons are needed; and so on. When n is large, the problem is not as easy to handle, and solving a large problem directly can be quite difficult.
The idea of divide and conquer is to split a large problem that is hard to solve directly into smaller problems of the same kind, so that they can be conquered separately.
If the original problem can be divided into k subproblems (1 < k ≤ n), those subproblems can all be solved, and the solution of the original problem can be obtained from their solutions, then this approach is feasible. The subproblems produced by divide and conquer are often smaller instances of the original problem, which makes recursion a natural tool: each subproblem has the same form as the original but a smaller size, and eventually the subproblems become small enough to solve directly. Divide and conquer and recursion are therefore like twin brothers, often applied together in algorithm design, and many efficient algorithms arise this way.
Problems solved by divide and conquer generally have the following characteristics:
(1) The problem can be solved easily once its size is reduced to a certain point;
(2) The problem can be decomposed into several smaller problems of the same kind, that is, it has the optimal-substructure property;
(3) The solutions of the subproblems can be combined into a solution of the original problem;
(4) The subproblems are mutually independent, that is, they do not share common subproblems.
The first characteristic is satisfied by most problems, since the computational complexity of a problem generally grows with its size. The second characteristic is the precondition for applying divide and conquer and reflects the use of recursive thinking. The third characteristic is the key: whether divide and conquer can be used depends on whether the problem has it; if a problem has the first two characteristics but not the third, greedy methods or dynamic programming may be considered instead. The fourth characteristic concerns efficiency: if the subproblems are not independent, divide and conquer does a lot of unnecessary work by solving common subproblems repeatedly; the method is still applicable, but dynamic programming is then the better choice.
At each level of the recursion, divide and conquer consists of three steps (see the sketch after this list):
(1) Divide: split the original problem into several subproblems that are smaller, mutually independent, and of the same form as the original;
(2) Conquer: if a subproblem is small enough to be solved easily, solve it directly; otherwise solve it recursively;
(3) Combine: merge the solutions of the subproblems into a solution of the original problem.
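As an added illustration (not part of the original text) of these three steps, here is a merge sort sketch in C: dividing splits the array in half, conquering sorts each half recursively, and combining merges the two sorted halves. The array size is assumed not to exceed MAXN.
#include <stdio.h>
#include <string.h>
#define MAXN 100

/* Merge sort of a[l..r): divide into two halves, sort each
   recursively, then merge the two sorted halves. */
void merge_sort(int a[], int l, int r)
{   int mid, i, j, k;
    int tmp[MAXN];
    if (r - l <= 1) return;              /* size 0 or 1: already solved */
    mid = (l + r) / 2;                   /* divide */
    merge_sort(a, l, mid);               /* conquer the two subproblems */
    merge_sort(a, mid, r);
    i = l; j = mid; k = 0;               /* combine: merge the sorted halves */
    while (i < mid && j < r)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < r)   tmp[k++] = a[j++];
    memcpy(a + l, tmp, k * sizeof(int));
}

int main(void)
{   int a[] = {5, 2, 9, 1, 7, 3};
    int n = sizeof(a) / sizeof(a[0]), i;
    merge_sort(a, 0, n);
    for (i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
Because the two halves are independent and merging n elements takes O(n) time, the whole sort runs in O(n log n) time.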
6. Dynamic Programming
For complicated problems, however, the decomposition is often not into a few independent subproblems but into a whole series of subproblems that overlap. If such a problem is simply decomposed and the subproblem solutions are combined to derive the solution of the large problem, the same subproblems are solved over and over again, and the time required grows exponentially with the problem size.
To avoid repeatedly solving the same subproblems, an array is introduced and the solutions of all subproblems are recorded in it, whether or not each of them turns out to be needed for the final answer; this is the basic technique of dynamic programming. The following example illustrates the dynamic programming method.
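Before turning to that example, a minimal sketch of this idea (added here, reusing the Fibonacci function from the recursion section): an array remembers every subproblem solution so that each one is computed only once. The names memo, fib_dp, and the bound MAXF are assumptions of the sketch.
#define MAXF 60

/* Fibonacci with an array that stores every subproblem solution;
   valid for 0 <= n < MAXF.  memo[] is zero-initialized, and 0 is
   used as the "not computed yet" mark (fib(0) is handled directly). */
long long memo[MAXF];

long long fib_dp(int n)
{   if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];            /* already solved */
    memo[n] = fib_dp(n - 1) + fib_dp(n - 2);     /* solve once, then remember */
    return memo[n];
}
Each value fib_dp(i) is computed exactly once, so the running time drops from exponential to O(n).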
[Problem] Compute a longest common subsequence of two character sequences.
Problem description: a subsequence of a character sequence is obtained by deleting any number of characters (possibly none) from the given sequence; the remaining characters need not be consecutive. Let the given sequence be X = x0 x1 ... xm-1. A sequence Y = y0 y1 ... yk-1 is a subsequence of X if there is a strictly increasing index sequence <i0, i1, ..., ik-1> such that yj = xij for every j = 0, 1, ..., k-1. For example, Y = "bcdb" is a subsequence of X = "abcbdab".
Consider how to decompose the longest-common-subsequence problem into subproblems. Let A = a0 a1 ... am-1 and B = b0 b1 ... bn-1, and let Z = z0 z1 ... zk-1 be a longest common subsequence of A and B. It is not hard to prove the following properties:
(1) if am-1 = bn-1, then zk-1 = am-1 = bn-1, and z0 z1 ... zk-2 is a longest common subsequence of a0 a1 ... am-2 and b0 b1 ... bn-2;
(2) if am-1 != bn-1 and zk-1 != am-1, then z0 z1 ... zk-1 is a longest common subsequence of a0 a1 ... am-2 and b0 b1 ... bn-1;
(3) if am-1 != bn-1 and zk-1 != bn-1, then z0 z1 ... zk-1 is a longest common subsequence of a0 a1 ... am-1 and b0 b1 ... bn-2.
Thus, when looking for a longest common subsequence of A and B: if am-1 = bn-1, we solve one further subproblem, finding a longest common subsequence of a0 a1 ... am-2 and b0 b1 ... bn-2 and appending the common character; if am-1 != bn-1, we solve two subproblems, finding a longest common subsequence of a0 a1 ... am-2 and B, and a longest common subsequence of A and b0 b1 ... bn-2, and take the longer of the two as a longest common subsequence of A and B.
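These properties translate directly into the recurrence used by the program below, where c[i][j] denotes the length of a longest common subsequence of the first i characters of A and the first j characters of B (this restatement is added here for clarity):
c[i][j] = 0                            if i = 0 or j = 0
c[i][j] = c[i-1][j-1] + 1              if i, j > 0 and a[i-1] = b[j-1]
c[i][j] = max(c[i-1][j], c[i][j-1])    if i, j > 0 and a[i-1] != b[j-1]
Filling the table row by row takes O(m×n) time, and the answer is c[m][n].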
The Code is as follows:
#include <stdio.h>
#include <string.h>
#define N 100

char a[N], b[N], str[N];
int c[N][N];                 /* c[i][j]: length of an LCS of the first i chars of a and the first j chars of b */

int lcs_len(char *a, char *b, int c[][N])
{   int m = strlen(a), n = strlen(b), i, j;
    for (i = 0; i <= m; i++) c[i][0] = 0;
    for (j = 0; j <= n; j++) c[0][j] = 0;
    for (i = 1; i <= m; i++)
        for (j = 1; j <= n; j++)
            if (a[i-1] == b[j-1])
                c[i][j] = c[i-1][j-1] + 1;
            else if (c[i-1][j] >= c[i][j-1])
                c[i][j] = c[i-1][j];
            else
                c[i][j] = c[i][j-1];
    return c[m][n];
}

char *build_lcs(char s[], char *a, char *b)
{   int k, i = strlen(a), j = strlen(b);
    k = lcs_len(a, b, c);
    s[k] = '\0';
    while (k > 0)                            /* walk back through the table */
        if (c[i][j] == c[i-1][j]) i--;
        else if (c[i][j] == c[i][j-1]) j--;
        else { s[--k] = a[i-1];
               i--; j--;
             }
    return s;
}

int main(void)
{   printf("Enter two strings (each shorter than %d characters):\n", N);
    scanf("%s%s", a, b);
    printf("LCS = %s\n", build_lcs(str, a, b));
    return 0;
}
7. Iterative Method
The iterative method is a common algorithm design technique for finding approximate roots of equations or systems of equations. Suppose the equation is f(x) = 0. First derive, by algebraic manipulation, an equivalent form x = g(x), and then proceed as follows:
(1) choose an initial approximation of a root and assign it to the variable x0;
(2) save the value of x0 in the variable x1, compute g(x1), and store the result back in x0;
(3) repeat step (2) until the absolute value of the difference between x0 and x1 is smaller than the required precision.
If the equation has a root and the sequence of approximations produced by these steps converges, the final value of x0 is taken as a root of the equation. The same idea carries over to a system of equations xi = gi(X) (i = 0, 1, ..., n-1), where X denotes the vector of all unknowns. Expressed as a C-like program sketch:
[Algorithm] Finding the roots of a system of equations by iteration
{   for (i = 0; i < n; i++)
        x[i] = initial approximate root;
    do {
        for (i = 0; i < n; i++)
            y[i] = x[i];
        for (i = 0; i < n; i++)
            x[i] = gi(X);
        for (delta = 0.0, i = 0; i < n; i++)
            if (fabs(y[i] - x[i]) > delta) delta = fabs(y[i] - x[i]);
    } while (delta > epsilon);
    for (i = 0; i < n; i++)
        printf("The approximate root of variable x[%d] is %f\n", i, x[i]);
}
Two situations require attention when finding roots by iteration:
(1) If the equation has no solution, the sequence of approximations produced by the algorithm will not converge and the iteration becomes an endless loop. Therefore, before applying the iterative method, check that the equation has a solution, and limit the number of iterations in the program;
(2) Even if the equation has solutions, an ill-chosen iteration formula or an unreasonable choice of the initial approximation can also cause the iteration to fail.
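As a concrete single-equation example (added here for illustration), steps (1) to (3) applied to x = g(x) with g(x) = (x + a/x)/2, whose fixed point is the square root of a; the function name iterate_sqrt, the precision EPSILON, and the iteration limit are assumptions of the sketch:
#include <stdio.h>
#include <math.h>

#define EPSILON 1e-6

/* Fixed-point iteration x = g(x) with g(x) = (x + a/x)/2;
   the fixed point is sqrt(a).  A cap on the number of
   iterations guards against non-convergence, as the note
   above recommends. */
double iterate_sqrt(double a, double x0)
{   double x1;
    int count = 0;
    do {
        x1 = x0;                        /* step (2): remember the old value */
        x0 = (x1 + a / x1) / 2.0;       /* x0 = g(x1) */
    } while (fabs(x0 - x1) > EPSILON && ++count < 100);
    return x0;
}

int main(void)
{   printf("sqrt(2) is approximately %f\n", iterate_sqrt(2.0, 1.0));
    return 0;
}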
8. Exhaustive search
The exhaustive search method enumerates, in some order, all candidates that could possibly be solutions and tests them one by one, picking out those that satisfy all the requirements of the problem as its solutions.
[Problem] Place the six variables a, b, c, d, e, and f at the six positions of a triangle: a, c, and e at the three corners and b, d, and f at the midpoints of the three sides, so that the sides carry the variable groups (a, b, c), (c, d, e), and (e, f, a). The six variables take distinct integer values from 1 to 6. Find all assignments for which the sums of the variables on the three sides are equal.
The program introduces the variables a, b, c, d, e, and f and lets them take, in turn, integer values from 1 to 6 with no two equal; for every arrangement it tests whether the sums of the variables on the three sides of the triangle are equal, and if so, the arrangement satisfies the requirements and is output. After all value combinations have been tried, every solution has been obtained. The program is as follows:
#include <stdio.h>

int main(void)
{   int a, b, c, d, e, f;
    for (a = 1; a <= 6; a++) {
        for (b = 1; b <= 6; b++) {
            if (b == a) continue;
            for (c = 1; c <= 6; c++) {
                if ((c == a) || (c == b)) continue;
                for (d = 1; d <= 6; d++) {
                    if ((d == a) || (d == b) || (d == c)) continue;
                    for (e = 1; e <= 6; e++) {
                        if ((e == a) || (e == b) || (e == c) || (e == d)) continue;
                        f = 21 - (a + b + c + d + e);   /* 1+2+...+6 = 21, so f is the remaining value */
                        if ((a + b + c == c + d + e) && (a + b + c == e + f + a)) {
                            printf("%6d\n", a);         /* print the arrangement as a triangle */
                            printf("%4d%4d\n", b, f);
                            printf("%2d%4d%4d\n", c, d, e);
                            scanf("%*c");               /* pause after each solution */
                        }
                    }
                }
            }
        }
    }
    return 0;
}
Programs written with the exhaustive method usually do not adapt well to changes in the problem. If the problem were changed so that nine variables are arranged on the triangle with four variables on each side, the number of nested loops in the program would have to change accordingly.