In a discussion this week, some people claimed that a + (b + c) is always equal to (a + b) + c whenever a, b, and c are simple numeric types such as int, float, or double.
This is certainly true in mathematics, but not in code. First consider System.Int32 and the test.cs below:
using System;

class Program
{
    static void Main(string[] args)
    {
        int a = int.MaxValue;
        int b = 1;
        int c = -1;

        try { Console.WriteLine(a + (b + c)); }
        catch (Exception e) { Console.WriteLine(e.Message); }

        try { Console.WriteLine((a + b) + c); }
        catch (Exception e) { Console.WriteLine(e.Message); }
    }
}
Compile with csc.exe test.cs and run test.exe. The result is as follows:
2147483647
2147483647
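Why both orders print 2147483647 comes down to two's-complement wraparound. Here is a minimal sketch of the intermediate values (the class name UncheckedSketch is only for illustration; the values a, b, c are the same as above):

using System;

class UncheckedSketch
{
    static void Main()
    {
        int a = int.MaxValue;
        int b = 1;
        int c = -1;

        // b + c == 0, so the left-associated expression never overflows.
        Console.WriteLine(a + (b + c));        // 2147483647

        // In unchecked (default) mode, a + b silently wraps to int.MinValue...
        int sum = unchecked(a + b);
        Console.WriteLine(sum);                // -2147483648

        // ...and adding -1 wraps back around to int.MaxValue.
        Console.WriteLine(unchecked(sum + c)); // 2147483647
    }
}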
It is easy to understand. Now compile with csc.exe /checked test.cs and run test.exe. The result is as follows:
2147483647
Arithmetic operation resulted in an overflow.
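The same behavior can be reproduced without the compiler flag by using C#'s checked keyword, which makes overflow throw instead of wrap for a given expression. A small sketch (the class name CheckedSketch is just for illustration):

using System;

class CheckedSketch
{
    static void Main()
    {
        int a = int.MaxValue;
        int b = 1;
        int c = -1;

        // b + c is 0, so this never overflows.
        Console.WriteLine(checked(a + (b + c)));   // 2147483647

        try
        {
            // a + b exceeds int.MaxValue, so this throws OverflowException.
            Console.WriteLine(checked((a + b) + c));
        }
        catch (OverflowException e)
        {
            Console.WriteLine(e.Message);          // Arithmetic operation resulted in an overflow.
        }
    }
}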
So the order of operations really does matter. Now consider a more interesting example: floating-point numbers (float).
using System;

class Program
{
    static void Main(string[] args)
    {
        float a = float.MaxValue;
        float b = -float.MaxValue;
        float c = -1;

        Console.WriteLine(a + (b + c));
        Console.WriteLine((a + b) + c);
    }
}
Compile with csc.exe test.cs and run test.exe. The result is as follows:
0
-1
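This time the difference comes from rounding rather than overflow: float.MaxValue is about 3.4e38, so adding or subtracting 1 is far below float's precision and is simply rounded away. A minimal sketch of the intermediate values (assuming b = -float.MaxValue as in the listing above; the class name FloatSketch is only for illustration):

using System;

class FloatSketch
{
    static void Main()
    {
        float a = float.MaxValue;
        float b = -float.MaxValue;
        float c = -1;

        // -float.MaxValue - 1 rounds back to -float.MaxValue (the 1 is lost),
        // so a + (b + c) collapses to a + (-a) == 0.
        Console.WriteLine(b + c == b);   // True
        Console.WriteLine(a + (b + c));  // 0

        // a + b is exactly 0, so (a + b) + c is just c.
        Console.WriteLine(a + b);        // 0
        Console.WriteLine((a + b) + c);  // -1
    }
}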
Now I have a question for you:
If you compile with csc.exe /checked test.cs and run test.exe, what is the result? Why?
Original article: When (a + b) + c != a + (b + c)...
Author: LoveJenny