Imperative programming is command-oriented: the programmer hands the machine a sequence of commands, and the machine executes them exactly as written, one after another. The efficiency of such a program depends on the number of commands executed, which is why we use notation such as big-O to describe time and space complexity.
Despite the name, a functional language is not simply "programming with functions" in the everyday sense. Notice that a pure functional language has no variables: nothing can be changed, and everything is immutable once defined. What is the benefit of such a restriction? And if everything is immutable, how do we program at all?
In fact, what we build in functional programming is relationships between entities. In this sense Lisp is not purely functional, yet it still belongs to the functional family. By the same token, most scripting languages that provide native list support can borrow functional features, but that is not the essence of functional languages. To know something, we should also know why it is so: given that we already have precise, natural imperative programming, why do we need functional programming? Let's look at a small example.
int fab(int n) {
    return n == 1 || n == 2 ? 1 : fab(n - 1) + fab(n - 2);
}
This is a C program that computes the nth term of the Fibonacci sequence; the corresponding Haskell code is this:
fab :: (Num a, Eq a) => a -> a
fab n = if n == 1 || n == 2 then 1 else fab (n - 1) + fab (n - 2)
They look almost identical, right? But the two programs differ greatly in execution efficiency. Why? C is the standard imperative language, so every statement you write is executed mechanically, exactly as given. If you want better efficiency, you must analyze the program yourself and manually reduce the number of statements executed. In this particular C program, note that each function call spawns two further function calls, so the total number of calls grows exponentially. For example, if we write fab (5), the actual execution looks like this:
fab (5)
  fab (4)
    fab (3)
      fab (2)
      fab (1)
    fab (2)
  fab (3)
    fab (2)
    fab (1)
We can see that fab (3) is evaluated twice. To compute fab (5) we actually perform nine function calls.
What about a functional language? As we said, functional languages have no variables; in other words, a value, once defined, never changes. So when fab (5) is evaluated, the process looks like this:
fab (5)
  fab (4)
    fab (3)
      fab (2)
      fab (1)
  fab (3)
There are only five distinct applications in total. Note that I say "application", not "call": a function in a functional language does not mean "call" or "execute subroutine" as in an imperative language; it expresses a relationship between functions. For example, the two applications of fab inside the definition of fab state that, to evaluate fab, the two later fab values must be evaluated first. There is no calling process here, because the whole computation is static. Haskell may treat every fab value as already determined, so for each fab it encounters it only needs to compute the value once and can then cache the result.
Essentially, this means that the program we hand to a functional language is not a sequence of "commands" but a description of data transformations. This allows the language to dig into those descriptions, find the redundant common parts, and optimize them away. That is the secret of how functional languages can be efficient without careful hand-tuning, while imperative languages need it. Imperative languages can certainly be optimized too, but they have side effects, and most of the time those side effects are used to perform the computation itself, so this kind of optimization is hard to apply broadly; only a few peephole optimizations achieve results.
In this case, two of our fab applications essentially overlap. Haskell can detect this and cache the results of those applications (note that a necessary condition for caching a result is that the function's return value never changes — this is the central property of functional languages). If a later computation needs those results, it does not recompute them but simply looks them up. This is the main reason the two nearly identical programs above differ so much in efficiency.
Functional languages have this advantage, but do they have drawbacks? Of course they do. A functional language does not map onto the machine as directly as an imperative one, which in some cases leads to poorer runtime efficiency and lower development efficiency. A deeper understanding of the computational model will narrow this gap. Note, however, that imperative languages are rooted in the von Neumann architecture: should a revolutionary new architecture appear, imperative languages would no longer fit it and could run only by simulation, and at that point the positions of functional and imperative languages would be completely reversed. Of course, this is not a problem we need to consider today, but it is still well worth understanding a little of the functional programming mindset.