Discovering functional programming from function literals
Copyright notice: I wrote this article, and it was published in the second-half-of-March 2015 issue of Programmer magazine under the original title "Discovering Functional Programming from Function Literals". The copyright belongs to Programmer magazine; reproduction without permission is prohibited.
Introduction
I believe many programmers who, like me, first approached functional programming were confused and puzzled by the concept of the "function literal". As my learning deepened and the concept became clear, I did some sorting and backtracking, extending the ideas of functional programming down to the most basic language elements, and I came to believe that the concept of the "function literal" points straight at the core intent and philosophy of functional programming. So I want to use the function literal as an entry point and vantage point to discuss the motives and intentions behind functional-programming thinking.
Function literals: how were they born?
For the concept of the "function literal", I personally lean toward this explanation: given a "type" declaration that describes a class of functions, a function literal is a "value" (or instance) of that function type, written inline (in-line). Just as when we define an ordinary variable in Scala, "var num: Int = 1", the digit "1" is an Int "literal": it denotes a "value" of type Int. As for "inline (in-line)", perhaps "in place" is the more fitting phrase: the function is not written out as an ordinary function definition, but is written down right where a function "value" (that is, a literal) is needed, just as casually as we write the digit "2" when re-assigning a new value to the variable num. In fact, every "literal" is inline, written down on the spot; the reason for stressing "inline" here is to distinguish it from the traditional form of function definition, which we will come back to later.
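To make the analogy concrete, here is a minimal Scala sketch (the names are our own illustration) that puts an Int literal and a function literal side by side:

```scala
// "1" is an Int literal: an inline "value" of type Int.
var num: Int = 1
// Re-assigning just means writing another literal in place.
num = 2

// Likewise, "(n: Int) => n + 1" is a function literal:
// an inline "value" of the function type Int => Int.
val inc: Int => Int = (n: Int) => n + 1
println(inc(num)) // prints 3
```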
Tracing back: what caused function literals to be born? In traditional, non-functional programming languages, a function is just a function: a code structure consisting of a function name, a parameter list, a return value, and a function body. There is no notion of a function "type" versus a function "value" (literal), because the two are fused together in the traditional function definition: the definition is both the description of the function's "type" and the function's "value". But once we enter the realm of functional programming languages, something "substantial" changes: the status of the function is raised to an unprecedented height. It becomes first-class, used in exactly the same way as other data types. A typical example is that in a functional language we can define a function the way we define an ordinary variable (we will see a complete example later). Since the definition of a variable consists of a name, a type, and a value, for functions the split into "type" and "value" becomes unavoidable. Let's not yet ask why functions must be defined and used this way; the fait accompli is that functions now have a "value", the so-called "function literal".
The "type" and "value" of a function: where do they come from?
The concept of the "function literal" is hard to grasp on its own. If we can understand how a function's "type" and "value" arise, it will deepen our understanding of the concept and give us a more accurate picture of what a "function" is in functional programming.
Let's first talk about what the "type" of a function is and how to describe it. Honestly, this problem is not hard to solve, because a function's "type characteristics" are plainly reflected in the types of its parameters and of its return value. If we extract these types from the function definition and arrange them in the shape of a function signature, I believe most people can easily understand and accept the resulting notation, and that is exactly what Scala does. Let's look at a function written in the traditional definition form in Scala:
def plusOne(num: Int): Int = { num + 1 }
This function returns the incoming Int parameter plus 1. Following the type-extraction idea above, the "type" of this function can be described as:
(Int) => Int
As a whole we try to preserve the original shape of the function, so the parentheses remain, denoting the parameter list (in Scala, when there is only one parameter the parentheses may be omitted); inside them are the types of the parameters, with multiple parameter types separated by commas. The arrow splits the parameter list from the return value, and to the right of the arrow is the return type. This "type" describes a "class" of functions (note: a class of functions, not one particular function — this matters, and we will discuss the difference later): they take an Int parameter and return an Int result. This notation is accurate, vivid, and perfectly acceptable, and it is exactly Scala's syntax for describing function types.
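As a small sketch of this notation (the names here are our own), here are function types written out in Scala, including the comma-separated multi-parameter case:

```scala
// One Int in, one Int out; with a single parameter
// the parentheses in the type could also be omitted: Int => Int
val addOne: (Int) => Int = n => n + 1

// Two parameters: their types are comma-separated inside the parentheses.
val sum: (Int, Int) => Int = (a, b) => a + b

println(addOne(1)) // 2
println(sum(2, 3)) // 5
```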
The next question is how to describe the "value" of a function. As mentioned earlier, the traditional function definition mixes the function's "type" and "value" together. If, following the type-extraction approach above, we remove the type-declaration parts from the function definition, then what remains is naturally the "value" part. Using the same example function, by this reasoning its "value" should be this piece of code:
(num) => { num + 1 }
The only adjustment made here is replacing the "=" between the parameter list and the function body with "=>", in order to meet Scala's syntactic requirements; this form also echoes the type-declaration form above. So the description of a function's "value" can be settled this way.
Finally, let's put the function's "type" and "value" together and see how a function is defined just like an ordinary variable:
val plusOne: (Int) => Int = (num) => { num + 1 }
In this example we define a variable (more precisely, a single-assignment variable that does not allow a second assignment): plusOne. "Int => Int" is the variable's type, and it obviously denotes a class of functions; to the right of the equals sign is the variable's value, which is a function literal. The variable plusOne behaves exactly like the function plusOne defined earlier, and in a sense this existence of functions as variables is the most essential insight of functional programming languages. This form of existence confirms the functional-programming creed that functions are first-class, since from definition to use they are no different from other types and values.
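A quick sanity check (our own snippet): the variable form is invoked exactly like a conventionally defined function:

```scala
// The variable form of the function: type on the left, literal on the right.
val plusOne: (Int) => Int = (num) => { num + 1 }

// It is called exactly like a function defined with def:
println(plusOne(1))  // 2
println(plusOne(41)) // 42
```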
Passing functions like passing values: a "fission-like" release of energy
But our story is far from over. If this separation only let us use functions the same way as other data types, I really would not see the value of making such a fuss. If in a non-functional language we pre-define the functions we need and simply call them, we could "play it safe" the same way in a functional language: declare a function type where a function is needed, then pass a matching function literal at call time — so what is the point of the extra "zigzag"? There must be strong evidence revealing something we have not yet noticed, and I believe the best example is the "higher-order function".
"Higher-order function" is not a lofty concept: a function that accepts other functions as parameters, or returns a function, is a higher-order function. Let's look at a simple example:
def hof(list: List[Int], f: (Int) => Int): List[Int] = {
  list match {
    case List()       => Nil
    case head :: tail => f(head) :: hof(tail, f)
  }
}
hof is a higher-order function: it accepts a List parameter and a function parameter f, where f itself takes an Int parameter and returns an Int result. hof's job is to hand every element of the list to f for processing, collect the processed results into a new list, and return it. You may well have noticed that hof is really a simplified version of the map function that comes with the collection types, but we are not going to discuss map here; we just use this simplified example to help us find what we are looking for. Looking back at the function variable plusOne defined earlier: it is exactly the kind of function hof can accept, so let's pass it to hof and see:
scala> hof(List(1, 2, 3), plusOne)
res0: List[Int] = List(2, 3, 4)
A new list is returned, each element increased by 1. Back to the question we raised earlier: if we do not let hof accept a function parameter, but instead call plusOne inside the function body, we get a non-higher-order function, nonHof, which looks like this:
def nonHof(list: List[Int]): List[Int] = {
  list match {
    case List()       => Nil
    case head :: tail => plusOne(head) :: nonHof(tail)
  }
}
OK, no problem here, and the results are identical. So where is the difference? What advantage does hof have over nonHof? I think our discussion has reached the very core of what this article wants to touch. Let's introduce another function — the key player is about to appear:
val double: (Int) => Int = (num) => { num * 2 }
The new function double multiplies the incoming Int parameter by 2 and returns it. Look at its type, "Int => Int": it is the same type as plusOne, and it is also the type of the second parameter accepted by the higher-order function hof. Does that mean double can also be passed to hof? Let's try:
scala> hof(List(1, 2, 3), double)
res1: List[Int] = List(2, 4, 6)
Yes, indeed! For anyone without a functional-programming background this is a bit of a surprise. We made no change at all to hof, yet it now looks like a different function with a new "trick": its processing logic has changed dramatically. The overall flow is unchanged — it still iterates over each element in order — but at the local level the way each element is processed has completely changed. Do you see the great potential of hof? It fixes the framework of a processing flow without specifying how individual elements are handled, leaving that to be passed in dynamically by the caller according to its own needs. This means that on each invocation of hof, not only may the data being processed differ, the processing logic itself may change; even the same data can be processed in different ways.
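In fact the per-element logic does not even need a name: it can be written down as an inline function literal at the call site. A short sketch, assuming the hof defined above:

```scala
// hof as defined earlier in the article.
def hof(list: List[Int], f: (Int) => Int): List[Int] = list match {
  case List()       => Nil
  case head :: tail => f(head) :: hof(tail, f)
}

// The per-element logic is handed over as an inline function literal:
println(hof(List(1, 2, 3), num => num + 1))   // List(2, 3, 4)
println(hof(List(1, 2, 3), num => num * 2))   // List(2, 4, 6)
println(hof(List(1, 2, 3), num => num * num)) // List(1, 4, 9)
```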
Looking back at nonHof: in the same scenario, does it have hof's ability to "switch" and "plug in" the local processing logic? Obviously not! Since it calls one specific function in its body, it effectively hard-codes that function's implementation into its own. From a higher level of abstraction, directly calling another predefined function inside a function means the logic and algorithm that produce and process this intermediate (local) value are completely "baked in" and cannot be replaced by a different set of logic and algorithms. In that case, to meet the double requirement the developer must either modify the existing nonHof or provide another version of nonHof.
Perhaps programmers with a deep OO background and sharp eyes will immediately react: in an OO language this is a typical scene that can be refactored with the "template method + strategy" patterns, which likewise achieve dynamic "switching" and "pluggability" of the local algorithm. But if you compare hof with the OO-optimized design, you will find the latter cumbersome, and every strategy implementation must still be predefined — far from the flexibility and development efficiency of a function literal that can be casually written down in a functional language. But don't misunderstand: we are not pitting OO against functional programming; each has its own strengths and home ground, and that is another topic. Here we only want to use the comparison of hof and nonHof to reveal an important feature of functional programming and its far-reaching influence:
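For comparison, here is a rough sketch of what that "template method + strategy" refactoring might look like (entirely our own illustration, written in OO-style Scala): every variation must be promoted to a predefined class before it can be plugged in.

```scala
// Strategy interface: each per-element algorithm is a predefined class.
trait ElementStrategy {
  def process(n: Int): Int
}

class PlusOneStrategy extends ElementStrategy {
  def process(n: Int): Int = n + 1
}

class DoubleStrategy extends ElementStrategy {
  def process(n: Int): Int = n * 2
}

// Template method: the traversal skeleton is fixed, the strategy pluggable.
def processAll(list: List[Int], s: ElementStrategy): List[Int] =
  list.map(s.process)

println(processAll(List(1, 2, 3), new PlusOneStrategy)) // List(2, 3, 4)
println(processAll(List(1, 2, 3), new DoubleStrategy))  // List(2, 4, 6)
```

A new behavior here costs a whole new class, whereas with hof it costs one inline literal — that is the gap in flexibility the text is pointing at.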
The separation of a function's "type" and "value" allows a function to be passed as a value (a function literal) to other higher-order functions; a higher-order function's behavior becomes highly flexible and variable with the functions passed in, which we might call a kind of "dynamic" quality. On the other hand, because the varying parts are stripped out into externally supplied functions, the higher-order function itself becomes highly reusable. All of this is a natural advantage of functional programming, supported directly at the language level: you do not even have to design your program deliberately to reap these benefits.
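As noted earlier, hof is essentially a simplified version of the collection types' map function, so this reusability is already built into the standard library. A closing sketch:

```scala
val plusOne: Int => Int = _ + 1
val double: Int => Int = _ * 2

// The standard library's map is exactly such a higher-order function:
// one reusable traversal, arbitrarily many per-element behaviors.
println(List(1, 2, 3).map(plusOne)) // List(2, 3, 4)
println(List(1, 2, 3).map(double))  // List(2, 4, 6)
```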
Conclusion
From "type" and "value" to their separation: this split is the inevitable result of functions becoming first-class citizens, and it allows a function to be declared by "type" and passed by "value", completely liberating how functions are used. In higher-order functions this feature is played to the extreme; it thoroughly changed the thinking and patterns of programming and created the functional programming we have today.