So far we have seen the difference between F# integers and floating-point numbers, but what distinguishes short, int, double, and float? Take a look at the following code:
let num1 = 10s
let num2 = 10
let num3 = 10L
let num4 = 10.0
let num5 = 10.0f
These numeric literals are written in a notation similar to C#'s. But are the types actually what we expect? We can confirm it in practice. After a little investigation I found F# Interactive: select the code and press Alt+Enter to send it to the F# Interactive window below the main editor. The output is as follows:
val num1 : int16
val num2 : int
val num3 : int64
val num4 : float
val num5 : float32
F#'s float is equivalent to C#'s double, and F#'s float32 corresponds to C#'s float, which is admittedly a bit confusing. In practice float32 is rarely needed; constants written like 10.0, which infer as float, are usually what you want.
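To make the correspondence concrete, here is a minimal sketch (the names d, f, f2, and d2 are mine, not from the original) showing explicit type annotations and the built-in float and float32 conversion functions:

let d : float = 10.0        // 64-bit floating point (C#'s double)
let f : float32 = 10.0f     // 32-bit floating point (C#'s float)
let f2 = float32 d          // convert float -> float32
let d2 = float f            // convert float32 -> float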
The literals for the other integer types are written as follows:
10u    // uint (uint32)
10us   // ushort (uint16)
10UL   // ulong (uint64)
120y   // sbyte
0xFFuy // byte
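If you would rather not memorize the suffixes, F# also provides conversion functions named after the types. A small sketch (my own example, using the standard conversion functions):

let u1 = uint32 10     // uint / uint32
let u2 = uint16 10     // ushort / uint16
let u3 = uint64 10     // ulong / uint64
let s1 = sbyte 120     // sbyte
let b1 = byte 0xFF     // byte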
Because F# is a .NET language, we can also try the decimal type:
let num6 = 10.5m
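Note that decimal does not mix silently with float; F# requires explicit conversions. A minimal sketch continuing from num6 (my own example):

let total = num6 * 3.0m      // decimal arithmetic; the literal also needs the m suffix
// let bad = num6 * 3.0      // would not compile: decimal and float cannot be mixed
let asFloat = float num6     // explicit conversion to float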