Notes after reading Programming Pearls, second edition
A summary, with applications to my own practice.
1. The main obstacle for a programmer is often not technical but psychological:
he cannot make progress because he is trying to solve the wrong problem.
By breaking through the conceptual barrier and attacking a simpler problem instead,
we finally solve the real one.
2. "The more general the problem, the easier it may be to solve." For programming,
this means that solving a problem directly for its 23 specific cases can be harder
than writing a general program for n cases and then applying it with n = 23.
3. Code development is top-down (start from the general concept, then refine down
to individual lines of code), but correctness analysis is bottom-up: we start
from individual lines of code and study how they work together to solve the problem.
When debugging, modifying code, or writing assertions, you must fully understand
the code, to defend against the impulse to "just keep changing the code until
it runs".
4. Keeping code simple is usually the key to correctness.
Assertions appear as comments in {} in the pseudocode example below (analyzed by the author):
(An assertion describes the relationship between the input, the program variables, and the
output, i.e. the program's state; assertions let programmers state these relationships precisely.)
David Gries's coffee can problem. You are initially given a coffee can that contains
some black beans and some white beans, plus a large pile of extra black beans. You then repeat
the following step until only one bean is left in the can:
Randomly select two beans from the can. If they are the same color, throw them both
away and put one black bean into the can. If they are different colors,
put the white bean back into the can and discard the black bean.
Prove that the process terminates. Given that both black and white beans are in the can at first,
can you tell the color of the last bean left in the can?
m: the number of black beans in the can.
n: the number of white beans in the can.
samecolor() is shorthand for drawing two beans from the can and testing whether they are the same color.
{(m + n) > 0}
loop
    if (m + n) = 1
        {m = 1 | n = 1}
        if m = 1
            {m = 1}
            color = black; break
        else
            {n = 1}
            color = white; break
    case
        samecolor() = black:
            {samecolor() = black & m >= 2}
            m = m - 1
            {(m + n) >= 1}
        samecolor() = white:
            {samecolor() = white & n >= 2}
            n = n - 2
            {n >= 0}
            m = m + 1
            {(m + n) >= 1}
        samecolor() = false:
            {samecolor() = false & n >= 1 & m >= 1}
            m = m - 1
            {(m + n) >= 1}
{(m + n) >= 1}
Analysis: when the loop ends, color has been assigned a value. In every case branch,
the total number of beans in the can (m + n) decreases by one. Since (m + n) is greater than 0
at the start, eventually exactly one bean remains (m = 1 or n = 1) and the program terminates.
As for the color: each step removes white beans zero or two at a time, so the parity of n is
invariant, and the last bean is white if and only if the initial number of white beans is odd.
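The process is easy to check by simulation (a sketch of my own, not code from the book; the function name is hypothetical):

```python
import random

def last_bean(black, white):
    """Simulate Gries's coffee can process; return the color of the last bean."""
    assert black + white > 0
    while black + white > 1:
        # Draw two beans at random from the can.
        draws = random.sample(["black"] * black + ["white"] * white, 2)
        black -= draws.count("black")
        white -= draws.count("white")
        if draws[0] == draws[1]:
            black += 1   # same color: discard both, add one black bean
        else:
            white += 1   # different colors: return the white, discard the black
    return "black" if black == 1 else "white"
```

Despite the random draws, the result is deterministic: the parity of the white-bean count never changes, so the last bean's color depends only on the initial counts.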
5. Simple, powerful programs that please their users without burdening their builders
are the ultimate goal of a programmer.
6. When estimating the performance of a real-time software system, we should derate the
computed performance by a factor of 2, 4, or 6 to compensate for our ignorance. For
reliability/availability commitments, we should keep a factor of 10 in reserve beyond
what we believe we can meet, again to compensate for our ignorance. When estimating
resources, costs, and schedules, we should keep a factor of 2 or 4 in reserve to make
up for what we have overlooked.
Einstein said: "Everything should be made as simple as possible, but no simpler."
7. Divide and conquer: to solve a problem of size n, recursively solve two subproblems of size n/2,
then combine their answers to obtain the answer to the whole problem.
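As a sketch of the pattern (my own illustration, not an example taken from this point of the book), merge sort solves two half-size subproblems and then combines their answers:

```python
def merge_sort(xs):
    """Sort a list by sorting its two halves recursively, then merging."""
    if len(xs) <= 1:
        return xs                       # a list of 0 or 1 elements is sorted
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])         # solve the first n/2-size subproblem
    right = merge_sort(xs[mid:])        # solve the second n/2-size subproblem
    # Combine: merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```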
8. An example of optimizing binary search:
Algorithm 1:
    l = 0; u = n - 1
    loop
        /* find the position of t in x[l..u] */
        if l > u
            p = -1; break    // the search is finished; t is not in x[l..u]
        m = (l + u) / 2      // bisect x[l..u]
        case
            x[m] < t: l = m + 1
            x[m] = t: p = m; break
            x[m] > t: u = m - 1
Algorithm 2 (the optimized algorithm):
    l = -1; u = n
    while l + 1 != u
        m = (l + u) / 2
        if x[m] < t
            l = m
        else
            u = m
    p = u
    if p >= n | x[p] != t
        p = -1               // t is not in x[0..n-1]
Note:
Both algorithm 1 and algorithm 2 perform a binary search. If t occurs more than once
in x[0..n-1], algorithm 1 may return the position of any of the occurrences, while
algorithm 2 always returns the first. Although algorithm 2 looks harder to follow than
algorithm 1, it is more efficient: algorithm 1 may compare against t twice in each
iteration, while algorithm 2 compares only once.
9. Code-tuning principle: tune code as little as possible; many other properties of
software are as important as efficiency, or even more important. Before tuning code,
we should make sure that no other approach offers a more effective solution.
10. An optimization example of my own:
On an embedded device I worked on, the configuration vendor supplied a dynamic library
that offered both a complete rewrite of the configuration storage area and the addition
or deletion of a single record. Testing showed that a single-record operation took about
as long as a full rewrite, and sometimes longer. The reason is that the storage medium
is flash, where every write must be preceded by an erase, so a single-record operation
is no faster than a full rewrite, and the full rewrite is the more stable operation.
So I never used the single-record calls; I always did a complete rewrite, which improved
efficiency. In other words, a seemingly simple operation is not necessarily the efficient
one; sometimes composing larger operations wins. Similarly, using % can be much slower
than an if test plus a subtraction.
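The remark about % can be illustrated with the classic circular-increment idiom (a minimal sketch; the function names are mine, and whether % actually loses depends on the language, compiler, and hardware):

```python
def advance_mod(j, n):
    # Circular increment of an index using the modulo operator.
    return (j + 1) % n

def advance_cmp(j, n):
    # The same circular increment using a comparison and a reset.
    # The integer division behind % is often a slow machine operation,
    # so this version can be faster; measure before relying on it.
    j += 1
    if j >= n:
        j = 0
    return j
```

For any j in 0..n-1 the two functions agree; only their cost differs.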
11. Reducing space usually has a pleasant side effect on run time: a smaller program
loads faster and is more likely to fit in the cache; the less data there is to operate on,
the less time the operations usually take; and the time needed to transmit data across a
network is usually proportional to the data size.
The key to space compression is simplicity, which can yield functionality, robustness,
speed, and space all at once.