This may be one of the most common mistakes made by OIers (informatics-olympiad contestants).
You will often see questions like "How do I solve this? Isn't it an NP problem?" or "This can only be done by brute-force search; it has been proved to be an NP problem." What most people actually mean by "NP problem" in such sentences is an NPC problem; they do not understand the difference between NP and NPC. NP does not mean "can only be solved by search." With that said, the misunderstanding is hopefully cleared up. The rest of this article explains P, NP, and NPC in detail; if you are not particularly interested you can skip it. Read on and you will see just how big a mistake it is to misuse the term "NP problem."
Let us briefly describe time complexity in a few words. Time complexity does not measure how long a program takes to solve a problem, but how fast the running time grows as the problem size grows. In other words, for a computer that processes data at high speed, efficiency on one particular input cannot tell a good program from a bad one; what matters is whether, when the data becomes hundreds of times larger, the running time stays roughly the same, becomes hundreds of times larger, or becomes tens of thousands of times larger. If the running time stays essentially the same no matter how large the data is, we say the program is very good and has time complexity O(1), also called constant complexity. If the running time grows in proportion to the data size, the complexity is O(n); finding the maximum of n numbers is an example. Algorithms such as bubble sort and insertion sort, which become four times slower when the data size doubles, have complexity O(n^2). There are also exhaustive algorithms whose running time grows geometrically; these have exponential complexity O(a^n), or even factorial complexity O(n!). You will never see a complexity written as O(2*n^2), because the leading 2 is a constant factor and does not affect how the running time grows; likewise O(n^3 + n^2) is simply O(n^3). Accordingly, we say that an O(0.01*n^3) program is less efficient than an O(100*n^2) program, even though for small n the former is actually faster, because the latter's running time grows more slowly with the data size and in the end the O(n^3) term will far exceed O(n^2). In the same spirit, O(n^100) is a lower complexity than O(1.01^n).
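To make the contrast concrete, here is a minimal Python sketch (the function names are my own) of the two textbook cases mentioned above: finding the maximum is O(n), while bubble sort is O(n^2), so doubling the data roughly doubles the former's work but quadruples the latter's.

```python
def find_max(a):
    """O(n): one pass over the data, so doubling n doubles the work."""
    best = a[0]
    for x in a:
        if x > best:
            best = x
    return best

def bubble_sort(a):
    """O(n^2): nested passes, so doubling n makes it about four times slower."""
    a = list(a)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(find_max([5, 3, 8, 1, 9, 2]))     # 9
print(bubble_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```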
It is easy to see that these complexities fall into two levels, and the second level is far larger than the first in every case: one level consists of O(1), O(log(n)), O(n^a) and so on, which we call polynomial complexity, because the problem size n appears in the base; the other consists of O(a^n) and O(n!)-type complexities, which are non-polynomial and are usually more than a computer can bear. When we solve a problem, the algorithm we choose generally needs polynomial complexity; a non-polynomial algorithm takes too much time and will usually time out, unless the data size is very small.
Naturally, one asks: can every problem be solved by an algorithm of polynomial complexity? Unfortunately, the answer is no. For some problems no correct algorithm can be found at all; these are called undecidable problems. The halting problem is a well-known example, which I have introduced and proved separately on my MSN space. Other problems do have algorithms but cannot possibly have polynomial ones. For example, outputting all permutations of the numbers 1 to n: whatever method you use, the complexity is factorial, simply because printing the result already takes factorial time. Some will say that such a "problem" is not a proper problem. A proper problem asks the program either to answer "yes" or "no" (a decision problem) or to produce an optimal value (an optimization problem). Under this definition we can still give a problem that is unlikely to have a polynomial-level algorithm: the Hamiltonian cycle problem. It asks: given a graph, can you find a route that passes through every vertex exactly once (no vertex missed, no vertex repeated) and finally returns to the starting point? (A route satisfying this condition is called a Hamiltonian cycle.) No polynomial-level algorithm has been found for this problem. In fact, it is one of the NPC problems we will discuss later.
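As a quick illustration of why the permutation problem cannot beat factorial time, here is a tiny Python sketch (using only the standard library; the function name is my own): the output alone has n! lines, so merely printing it already costs factorial time.

```python
from itertools import permutations

def print_all_permutations(n):
    """Prints all n! orderings of 1..n; the output size alone is factorial."""
    for p in permutations(range(1, n + 1)):
        print(*p)

print_all_permutations(3)   # prints 3! = 6 lines
```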
Now we introduce the concept of a P problem: if a problem has an algorithm that solves it in polynomial time, it is a P problem. P is the first letter of the English word "polynomial". Which problems are P problems? Generally speaking, NOI and NOIP do not set problems outside P, and the problems we commonly encounter in informatics contests are P problems. The reason is simple: a brute-force program that needs non-polynomial time and times out does not count as a valuable algorithm.
Next we introduce the concept of NP, which is a little harder (or perhaps easier) to grasp. Let me stress again, returning to the misunderstanding I am trying to clear up: NP does not mean "not P". An NP problem is a problem whose solutions can be verified in polynomial time. An equivalent definition is: an NP problem is one whose answer can be guessed and then checked in polynomial time. Here is an example. Suppose my luck is exceptionally good. Someone is working on a shortest-path question: is there a route from the start to the destination of length less than 100? He has drawn the graph from the data but cannot work it out, so he asks me how to pick the shortest route. I say: my luck is excellent, I will just show you a very short route. I draw a few lines at random and say, take this one. He adds up the weights along my route and, heavens, the total length is 98, less than 100. So a route shorter than 100 does exist, and if someone asks him how he knows, he can answer: because a concrete route shorter than 100 was found. In this problem, finding a solution is hard, but verifying one is easy: it takes only O(n) time to add up the lengths along a guessed route. So as long as my luck holds and I guess right, I can settle the question in polynomial time; I always guess a correct solution, and a route that does not satisfy the requirement can never fool me into accepting it. That is what an NP problem is. Of course there are also problems that are not NP: you can guess an answer, but the guess is useless because it cannot be verified in polynomial time. Here is a classic example of a problem for which no polynomial-time verification is known. The Hamiltonian cycle problem above is clearly NP, since it is easy to check whether a given route passes through every vertex exactly once. But change the question to: determine whether a graph has no Hamiltonian cycle. Now an answer cannot be verified in polynomial time, because unless you have tried every possible route, you cannot be sure the graph "has no Hamiltonian cycle".
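Here is a minimal Python sketch of the "verify a guess" step described above (the graph representation and function names are my own choices for illustration). Both checks take polynomial, in fact linear, time in the size of the guessed answer, which is exactly what puts these problems in NP.

```python
def route_shorter_than(weights, route, limit=100):
    """Check a guessed route: sum its edge lengths and compare with the limit."""
    total = 0
    for u, v in zip(route, route[1:]):
        if (u, v) not in weights:
            return False              # the guess uses an edge that does not exist
        total += weights[(u, v)]
    return total < limit

def is_hamiltonian_cycle(n, edges, cycle):
    """Check a guessed cycle: every vertex exactly once, then back to the start."""
    if sorted(cycle) != list(range(n)):
        return False                  # a vertex is missing or repeated
    closed = cycle + [cycle[0]]
    return all((u, v) in edges or (v, u) in edges
               for u, v in zip(closed, closed[1:]))
```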
The point of defining NP problems is that, in general, only NP problems can be expected to have polynomial algorithms: we would hardly hope that a problem whose solutions cannot even be verified in polynomial time has a polynomial-level algorithm that solves it. The reader will probably soon see that the hardest open question in informatics, the "P versus NP problem", is precisely about the relationship between the class P and the class NP.
Obviously, every P problem is an NP problem. If we can solve a problem in polynomial time, we can certainly verify a proposed solution in polynomial time: since we can compute the correct answer ourselves, we only need to compare it with the given solution. The key question is the converse: is every NP problem also a P problem? In set language, gather all P problems into a set P and all NP problems into a set NP; then obviously P is contained in NP. All research on NP problems now concentrates on one question: does P = NP hold? The famous "P versus NP problem" is exactly one sentence: prove or refute P = NP.
The P versus NP problem has long been one of the summits of informatics: conspicuous from afar, yet extremely hard to climb. It is an ultimate question on which enormous time and effort have been spent without a resolution, much like the Goldbach conjecture in mathematics.
So far the problem remains open. There is, however, a widely shared belief about the direction of the answer: most people believe that P = NP does not hold, that is, that at least one NP problem has no polynomial-complexity algorithm. One reason people are so convinced that P ≠ NP is that, in the course of studying NP problems, a very special class of NP problems was discovered: the NP-complete problems, or NPC problems, where C is the first letter of the English word "complete". It is the existence of NPC problems that makes people believe P ≠ NP. The rest of this article is devoted to NPC problems, and from it you will see how incredible the NPC problems make P = NP look.
To explain NPC problems, we first need a concept called reduction (reducibility).
Simply put, "problem A can be reduced to problem B" means that problem B can be used to solve problem A, or that problem A can be turned into problem B. Introduction to Algorithms gives this example: consider two problems, solving a linear equation in one unknown and solving a quadratic equation in one unknown. We say the former reduces to the latter: anyone who knows how to solve quadratic equations can certainly solve linear ones. Imagine two programs corresponding to the two problems. We can find a rule for rewriting any input of the linear-equation program into an input of the quadratic-equation program such that the two programs always produce the same result. The rule here is: keep the corresponding coefficients unchanged and set the quadratic coefficient to 0. Following this rule, the first problem is converted into the second, and the two problems are equivalent. Similarly, the Hamiltonian cycle problem can be reduced to the TSP (Travelling Salesman Problem): in the Hamiltonian cycle problem, set the distance between two directly connected vertices to 0 and the distance between two vertices that are not directly connected to 1; the question then becomes whether the TSP instance has a tour of length 0. A Hamiltonian cycle exists if and only if the TSP instance has a tour of length 0.
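The Hamiltonian-cycle-to-TSP reduction above is easy to write down explicitly. Here is a sketch in Python (the graph representation and names are my own): the transformation itself runs in polynomial time, and the original graph has a Hamiltonian cycle exactly when the produced TSP instance has a tour of total length 0.

```python
def hamiltonian_to_tsp(n, edges):
    """Build a TSP distance matrix: 0 for edges of the graph, 1 otherwise."""
    edge_set = {frozenset(e) for e in edges}
    dist = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            if u != v:
                dist[u][v] = 0 if frozenset((u, v)) in edge_set else 1
    return dist   # now ask the TSP solver: is there a tour of length 0?
```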
"Question A can be reduced to question B" has an important intuitive significance: the time complexity of B is higher than or equal to the time complexity of. That is to say, question a is no more difficult than question B. This is easy to understand. Since problem A can be solved by Problem B, if the time complexity of B is lower than that of A, the algorithm of A can be improved to B, the time complexity of the two is the same. Just as it is more difficult to solve a quadratic equation than to solve a one-dimensional equation, the solution to the former can be used to solve the latter.
Clearly, reduction has an important property: it is transitive. If problem A reduces to problem B and problem B reduces to problem C, then problem A reduces to problem C. The reason is simple and needs no explanation.
Now for the formal definition of reduction, which is easy to understand: if we can find a transformation rule by which any input of program A can be converted into an input of program B such that the two programs produce the same output, then we say problem A can be reduced to problem B.
Of course, what we mean here is polynomial-time reduction: the transformation of the input must itself be computable in polynomial time. Only a reduction that can be carried out in polynomial time is meaningful.
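The equation example from Introduction to Algorithms can be written as a tiny polynomial-time reduction. This is only a sketch (the solver functions are my own stand-ins): the transformation rule simply copies the coefficients and sets the quadratic coefficient to 0, after which the two programs give the same answer.

```python
def solve_linear(a, b):
    """Solve a*x + b = 0."""
    return -b / a

def solve_quadratic(a, b, c):
    """Solve a*x^2 + b*x + c = 0 (falls back to the linear case when a == 0)."""
    if a == 0:
        return solve_linear(b, c)
    d = (b * b - 4 * a * c) ** 0.5
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def reduce_linear_to_quadratic(a, b):
    """The transformation rule: keep the coefficients, set the quadratic one to 0."""
    return (0, a, b)

print(solve_linear(2, -6))                                   # 3.0
print(solve_quadratic(*reduce_linear_to_quadratic(2, -6)))   # 3.0
```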
From the definition of reduction we can see that when one problem is reduced to another, the time complexity goes up and the range of problems covered grows. By chaining reductions we keep finding algorithms of higher complexity but wider applicability to replace algorithms of lower complexity that only handle a narrow class of problems. Now look back at P and NP, and keep the transitivity of reduction in mind. It is natural to ask: if we keep reducing, repeatedly finding a slightly harder "big" NP problem that can absorb several "small" NP problems, might we eventually find a problem of maximal complexity, a super NP problem that swallows all NP problems? The answer is yes. That is, there exists an NP problem to which every NP problem can be reduced. In other words, once this problem is solved, all NP problems are solved. The existence of such a problem is hard to believe, and what is even more startling is that there is not just one; there are many. This class of problems is the legendary class of NPC problems, the NP-complete problems. The discovery of NPC problems caused a leap in the study of the whole P versus NP question, and we have every reason to believe that NPC problems are the most complex of all. Going back to the beginning of this article: when people want to say that a problem has no efficient polynomial algorithm, they should say it is "an NPC problem". At this point my goal is finally achieved: the distinction between NP problems and NPC problems has been made clear. This article has run to nearly 5000 words by now; I admire you for reading this far, just as I admire myself for writing this far.
The definition of an NPC problem is very simple: a problem is NPC if it satisfies two conditions. First, it is an NP problem. Second, every NP problem can be reduced to it. Proving that a problem is NPC is also straightforward: first show that it is at least an NP problem, then show that some known NPC problem can be reduced to it (by the transitivity of reduction, the second condition of the definition is then satisfied; how the very first NPC problem was obtained will be explained later). That done, the problem is an NPC problem.
Since every NP problem can be reduced to any NPC problem, if a single NPC problem were found to have a polynomial algorithm, then all NP problems could be solved with that algorithm and NP would equal P. That is why finding a polynomial algorithm for an NPC problem seems so unbelievable, and why the passage above said "it is the existence of NPC problems that makes people believe P ≠ NP". Intuitively, people believe that NPC problems have no efficient polynomial algorithms and can only be attacked by search of exponential or even factorial complexity.
While we are at it, let us mention NP-hard problems. An NP-hard problem is one that satisfies the second condition of the NPC definition but not necessarily the first (so the class of NP-hard problems is broader than the class of NPC problems). NP-hard problems are likewise not expected to have polynomial algorithms, but they are not the focus of this discussion because they need not be NP problems. Even if an NPC problem were someday found to have a polynomial-level algorithm, NP-hard problems might still have none. Indeed, since the NP-hard definition drops one restriction, an NP-hard problem may have higher complexity than every NPC problem and thus be even harder to solve.
Do not think that NPC problems are just an abstract notion with no concrete instance. NPC problems exist; a very concrete NPC problem is introduced below.
The problem described next is the logic-circuit problem. It was the first NPC problem; all other NPC problems are obtained from it by reduction, so the logic-circuit problem is the "ancestor" of NPC problems.
The logic-circuit problem is this: given a logic circuit, determine whether there exists an assignment of the inputs that makes the output true.
What is a logic circuit? A logic circuit consists of several inputs, one output, a number of logic gates, and the wires connecting them. Look at the example below and you will understand at once without further explanation.
[Diagram: input 1 and input 2 feed an OR gate; input 3 feeds a NOT gate; the outputs of the OR gate and the NOT gate feed an AND gate, whose result is the circuit's output.]
This is a simple logic circuit. When inputs 1, 2 and 3 are respectively (true, true, false) or (false, true, false), the output is true.
Is there a logic circuit whose output can never be true, no matter what the inputs are? Yes; here is a simple example.
[Diagram: input 1 and input 2 feed an AND gate; input 2 also feeds a NOT gate; the outputs of the AND gate and the NOT gate feed a second AND gate, whose result is the circuit's output, i.e. (input 1 AND input 2) AND (NOT input 2).]
In this circuit, the output is false no matter what the inputs are. In other words, there is no assignment of the inputs that makes the output true.
To restate the problem from above: given a logic circuit, ask whether there exists an input assignment that makes the output true; that is the logic-circuit problem.
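To make the problem concrete, here is a brute-force Python sketch (the circuit encoding is my own, and circuit 2 follows the reconstruction above): each circuit is written as a Boolean function, and we simply try every input combination. Of course, trying all 2^n combinations is exponential, which is precisely the difficulty the NPC discussion is about.

```python
from itertools import product

def circuit1(a, b, c):
    """The first example circuit: (input1 OR input2) AND (NOT input3)."""
    return (a or b) and (not c)

def circuit2(a, b):
    """The second example circuit: (input1 AND input2) AND (NOT input2)."""
    return (a and b) and (not b)

def satisfiable(circuit, n_inputs):
    """Try every assignment of the inputs; exponential in n_inputs."""
    return any(circuit(*bits)
               for bits in product([False, True], repeat=n_inputs))

print(satisfiable(circuit1, 3))   # True  -- e.g. inputs (True, True, False)
print(satisfiable(circuit2, 2))   # False -- the output is always false
```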
The logic-circuit problem is an NPC problem; this has been rigorously proved. It is obviously an NP problem, and it can be shown that every NP problem can be reduced to it directly (the fact that there are infinitely many NP problems does not cause an insurmountable difficulty). The proof is fairly involved; roughly speaking, the input and output of any NP problem can be encoded as the input and output of a logic circuit (think of how a computer works with 0s and 1s), so solving an NP problem amounts to asking whether the corresponding circuit has an input, i.e. a feasible solution, that makes its output true.
Once the first NPC problem existed, a flood of NPC problems followed, because to prove a new NPC problem one only needs to reduce a known NPC problem to it. In this way the Hamiltonian cycle problem was shown to be NPC, and so was the TSP. By now a great many NPC problems are known, and if a polynomial algorithm were found for any one of them, every NP problem could be solved perfectly. It is precisely the existence of NPC problems that makes P = NP so hard to believe. The P versus NP question still holds many interesting things waiting to be explored. Reaching this summit of informatics may be the ultimate goal of our generation; what we can do right now is, at the very least, stop confusing the concepts.

--------------------------------------------------------------------------

1. Problems that have a solution but no algorithm:
For example: are there 1,000,000 consecutive zeros somewhere after the decimal point of pi? Since pi is a definite real number, its digits are determined, so this question has a definite answer, either yes or no. We do not know which, but the answer exists objectively; it does not change with time or with our knowledge. Yet there is no algorithm that computes the answer. Of course there is a brute-force approach: keep computing digits of pi after the decimal point. If we find 1,000,000 consecutive zeros, the answer is yes; but if we never find them, we must keep computing forever and never stop. This brute-force method therefore cannot be called an algorithm, because it does not satisfy the requirement that an algorithm terminate in finitely many steps. So this question has no algorithm, at least for now; perhaps some day number theory will give a way to decide whether k consecutive zeros appear after the decimal point, or probability theory will tell us something about the distribution of pi's digits, and so on.
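Below is a sketch of that brute-force method in Python, assuming the third-party mpmath library for arbitrary-precision digits of pi (an assumption on my part, not something from the original text). It keeps scanning more and more digits for a run of k zeros, and if no such run exists it never terminates, which is exactly why it is not an algorithm.

```python
from mpmath import mp   # assumed third-party arbitrary-precision library

def find_zero_run(k, start_digits=1000):
    """Search the decimal digits of pi for k consecutive zeros.
    Returns the 1-based position of the run if found; may loop forever otherwise."""
    digits = start_digits
    while True:
        mp.dps = digits                 # compute pi to `digits` significant digits
        frac = str(mp.pi)[2:]           # the digits after "3."
        pos = frac.find("0" * k)
        if pos != -1:
            return pos + 1
        digits *= 2                     # not found yet: compute more digits and retry
```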
2. Problems with neither a solution nor an algorithm:
For example: is there an algorithm that decides whether any given proposition is true or false? This is the famous Turing halting problem. If such an algorithm existed, finding it once would settle everything: whatever new proposition we came up with, we could feed it to the algorithm and immediately learn whether it is true or false, and thus grasp the ultimate truths of the universe :). However, Turing proved that no such algorithm exists; this problem has no solution. (The proof uses Cantor's diagonal argument, the same diagonalization used to show that the real numbers cannot be matched one-to-one with the natural numbers.)
3. Computable and non-computable problems:
According to the Church–Turing thesis:
1. A problem is computable if it can be computed by a Turing machine. (Turing's definition)
2. A problem is computable if it can be computed in the lambda calculus. (Church's definition)
The Church–Turing thesis is not so much a theorem as a definition of "algorithm". "Algorithm" itself had long been an imprecise notion with no definite definition; the Church–Turing thesis gives algorithms a formal, mathematical definition.
Turing's claim is: every problem computable by a Turing machine has an algorithm (i.e. is computable), and every problem solvable by an algorithm can be computed by a Turing machine. The thesis itself cannot be proved. Like the constancy of the speed of light in physics, it is a law of nature that cannot be established by pure logic and can only be tested by experience. So far, like the constancy of the speed of light, the Turing thesis has stood the test of time. Even the development of quantum computers has not escaped the limits of the Turing machine: whatever a quantum computer can compute, an ordinary Turing machine can also compute; only the efficiency differs.
Two examples of problems that cannot be computed were given above: the question about pi and the halting problem.
4. Provable and unprovable
An axiom system consists of a number of axioms and some rules of derivation. Within the system, new theorems are derived from the axioms, and from already derived theorems, by applying these rules. If we can eventually derive the proposition we want to prove, the proposition is true; if we derive the negation of that proposition, then the proposition we wanted to prove is false.
View every theorem of the system as a node in a graph, and suppose that from theorems i1, i2, ..., ik the system's rules allow us to derive theorem j; then draw a directed edge from each of i1, ..., ik to j. In this way the whole system becomes a directed graph. Proving a theorem is really the construction of a "proof tree" leading from the axiom nodes to the node of the target proposition. Proving theorems is therefore similar to searching for paths in a graph (this, by the way, is the basic principle behind automated theorem proving).
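As a small illustration of "proving = path search", here is a Python sketch (the representation of axioms and rules is my own): it performs naive forward chaining, which is just reachability in the derivation graph described above.

```python
def provable(axioms, rules, goal):
    """rules is a list of (premises, conclusion) pairs, premises being a set.
    Repeatedly apply every rule whose premises are already known."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return goal in known

axioms = {"A", "B"}
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(provable(axioms, rules, "D"))   # True  -- reachable from the axioms
print(provable(axioms, rules, "E"))   # False -- an isolated, unreachable node
```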
At the age of 25, the extraordinary genius Gödel announced his famous incompleteness theorem. It states that any axiom system either contains a contradiction or is incomplete.
Containing a contradiction means the system can prove both that a proposition A is true and that the negation of A is true.
Being incomplete means there are propositions in the system that can be neither proved nor disproved. They are like isolated nodes in the graph above: nodes that can never be reached from the axiom nodes.
In the course of proving the incompleteness theorem, Gödel constructed a proposition that can be neither proved nor disproved. Stating it precisely is rather involved; based on my own understanding I simplify it to the following form:
Proposition A = "Proposition A is not true"
Now ask whether proposition A is true. If A is true, then by the content of A, A should not be true; if A is not true, then by the content of A, A should be true after all.
This example is not rigorous, because it actually conflates the syntactic and semantic levels. Still, I think it can serve as a simplified version of Gödel's construction. Gödel's actual example is far more rigorous and more complicated than this, but it is essentially similar: it exploits a logical paradox.
The way Russell and others proposed to resolve this kind of paradox was to divide predicate logic into levels, which gave rise to first-order and second-order predicate logic. In the example above, Russell would say that the content of proposition A describes a property of proposition A itself, which goes beyond what A is allowed to express; such an A, he held, is not a legitimate proposition.