We have spent a couple of posts on data types now; is that getting tedious? This time, let's change the angle and study functions!
A simple function that takes two input parameters and returns their sum looks like this:
let Add a b = a + b
This is a function definition. Note that let is the same keyword we use when declaring variables.
It feels almost magical how concise this is; after seeing this definition, C# starts to look like a verbose language. The code to invoke the function is as follows:
let c = Add 10 20
printfn "%d" c
Note that the call uses no parentheses, which may look a bit odd at first. The complete program is as follows:
#light
let Add a b = a + b
let c = Add 10 20
printfn "%d" c
C# and VB.NET programmers may find this code a little strange at first; it is not obvious which part is the function definition and which is the function call.
Rewritten in C#, the code above is roughly equivalent to:
static void Main(string[] args) {
Func<int,int,int> Add = (a, b) => a + b;
var c = Add(10, 20);
Console.WriteLine("{0}",c);
}
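One difference from a C# delegate is worth noting before moving on: an F# function defined this way is curried, so it can be partially applied by supplying only some of its arguments. A small sketch of my own (not from the original samples):

```fsharp
let Add a b = a + b

// Supplying only the first argument yields a new function of one argument
let add10 = Add 10

printfn "%d" (add10 20)   // prints 30
```

A C# Func<int,int,int> has no built-in equivalent of this; you would have to wrap it in another lambda by hand.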
F# functions are somewhat similar to C# delegates. However, the following will not compile:
#light
let c = Add 10 20 // Error: Add is not defined
printfn "%d" c
let Add a b = a + b
The definition of Add cannot be found at the point where it is called, so a compilation error occurs: in F#, a function must be defined before it is used.
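Moving the definition above the call fixes the error; a minimal corrected version of the same program:

```fsharp
#light
let Add a b = a + b      // defined first
let c = Add 10 20        // the compiler has already seen Add here
printfn "%d" c           // prints 30
```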
Next, let's define a function that computes the average of three numbers:
let Mean a b c = (a + b + c) / 3.0
The calling code is as follows:
printfn "%f" (Mean 10 20 30)
No such luck; there is a compilation error: the type 'float' does not match the type 'int'.
The reason is not entirely obvious; my guess is as follows:
When Mean is called, its parameter types are fixed as int.
The body then becomes a mixed int / float operation.
Because the operand types differ, the expression cannot be computed.
Hence the compilation error.
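One way to check this guess (assuming the inference works as sketched above): since the literal 3.0 in the body forces the parameters to be floats, calling Mean with float literals should compile and run without trouble:

```fsharp
let Mean a b c = (a + b + c) / 3.0

// Float literals match the inferred parameter type, so this compiles
printfn "%f" (Mean 10.0 20.0 30.0)   // prints 20.000000
```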
In this respect, F#'s type checking is stricter than C#'s. The following example shows just how strict it is:
let a = 10 + 1.5
This also fails to compile, because the two operand types do not match. Mean can be fixed with the following modification:
let Mean a b c = double (a + b + c) / 3.0
Here we encounter something new: double is a type-conversion function.
In F#, double appears to be an alias for float (presumably), and replacing double with float gives the same result.
For example, the following code is fine. (Some of the code references http://code.msdn.microsoft.com/fsharpsamples.)
let pi1 = float 3 + 0.1415
let pi2 = double 3 + double 0.1415
printfn "pi1 = %f, pi2 = %f" pi1 pi2
let i1 = int 3.1415
let i2 = int64 3.1415
printfn "i1 = %d, i2 = %d" i1 i2
let byteA = byte (3+4)
printfn "byteA = %d" byteA
The result:
pi1 = 3.141500, pi2 = 3.141500
i1 = 3, i2 = 3
byteA = 7