People like to talk about the concept of closures. In practice, though, you don't need the concept to write code; mastering lambda expressions and class+interface semantics is enough. Closures really only have to be managed explicitly when writing compilers and virtual machines. But given the theme of this series, I'll explain what a closure is. Before we can understand closures, we need to understand some common rules for argument passing and symbol resolution.
The first is call by value. This rule is familiar to all of us, because all the popular languages use it. Remember when you first started learning programming, there was always an exercise in the book like this:
void Swap(int a, int b)
{
    int t = a;
    a = b;
    b = t;
}

int main()
{
    int a = 0;
    int b = 1;
    Swap(a, b);
    printf("%d,%d", a, b);
}
Then the book asks what the program will output. Of course, we all know by now that a and b are still 0 and 1, unchanged. This is call by value. If we change the rule so that arguments are always passed in by reference, Swap causes main to eventually output 1 and 0; that is call by reference.
A less common rule is call by need. Call by need is an important rule in some well-known practical functional languages (such as Haskell): if an argument is never used, the expression passed for it is never evaluated. That may sound a bit mysterious, so let me give an example in C.
int Add(int a, int b)
{
    return a + b;
}

int Choose(bool i, int a, int b)
{
    return i ? a : b;
}

int main()
{
    int r = Choose(false, Add(1, 2), Add(3, 4));
    printf("%d", r);
}
How many times will Add be invoked in this program? We all know the answer: twice. But in Haskell, it would be invoked only once. Why? Because the first argument of Choose is false, the return value of the function depends only on b, not on a. The program senses this, so only Add(3, 4) is evaluated, not Add(1, 2). But don't think this happens because the compiler inlines the function at optimization time; Haskell's mechanism works at run time. So if we write a quicksort, sort an array, and output only the first number, the entire program has O(n) time complexity: quicksort's average case needs only O(n) work to determine the first element, and since the program outputs nothing else, the rest is never computed, so the whole program is O(n).
So now everyone knows call by value, call by reference, and call by need. Next let me tell you about a magical rule: call by name. It is so magical that I doubt I could use it to write a correct program. Let me give you an example:
void Set(int a, int b, int c, int d)
{
    a += b;
    a += c;
    a += d;
}

int main()
{
    int i = 0;
    int x[3] = {1, 2, 3};
    Set(x[i++], 10, 100, 1000);
    printf("%d,%d,%d,%d", x[0], x[1], x[2], i);
}
Anyone who has learned C knows this program effectively does nothing: x stays {1, 2, 3} and i becomes 1. If you change C's call by value to call by reference, x and i end up as {1111, 2, 3} and 1. But human imagination is rich, so a rule called call by name was invented. Call by name is also a form of passing by reference, but the difference is that every time the function uses an argument, the program re-executes the expression supplied for that parameter. Therefore, if you change C's call by value to call by name, what the above program actually does is:
x[i++] += 10;
x[i++] += 100;
x[i++] += 1000;
After the program executes, the values of x and i are {11, 102, 1003} and 3.