Double literals, lost digits, and the round-trip format
By default, the compiler parses a numeric literal that contains a decimal point as a double. In other words, when such a literal is written directly in code, its type is double simply because of the decimal point.
The output is 1.12345678912345 — the last digit is lost, because double's default string formatting is limited in precision. Is there a way to print the value in full? One option is to make the literal a decimal instead, by appending the M (or m) suffix. Another way to keep the last digit is to format the value with the round-trip format specifier R (or r). For example, string.Format("{0:R}", 1.123456789123477) outputs 1.123456789123477 — the last digit survives thanks to the round-trip format. The round-trip format guarantees that if the resulting string is converted back to a double, the original value is recovered exactly; without it, the parsed value may differ from the original.
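The points above can be sketched in a small C# program. Note that the exact default output of ToString() depends on the runtime (older .NET Framework rounds to 15 significant digits, while .NET Core 3.0 and later emit a shortest round-trippable string by default), so the comments below describe the older behavior the article assumes:

```csharp
using System;

class Program
{
    static void Main()
    {
        // A literal with a decimal point defaults to double.
        double d = 1.123456789123477;   // 16 significant digits

        // Default formatting may drop/round the last digit
        // (e.g. on .NET Framework, which formats doubles to 15 digits).
        Console.WriteLine(d.ToString());

        // The round-trip specifier "R" preserves every digit.
        Console.WriteLine(string.Format("{0:R}", d));   // 1.123456789123477

        // Alternative: the M suffix makes the literal a decimal,
        // which has 28-29 significant digits of precision.
        decimal m = 1.123456789123477M;
        Console.WriteLine(m);                           // 1.123456789123477

        // Round-trip guarantee: parsing the "R" string recovers
        // the original double value exactly.
        double back = double.Parse(d.ToString("R"));
        Console.WriteLine(back == d);                   // True
    }
}
```

A usage note: for money or other values where every decimal digit matters, declaring the variable as decimal with the M suffix is usually the better fix; the R specifier only changes how an existing double is printed, not how precisely it is stored.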