Controlling the Precision of the double Type in C#
In C#, if you want to keep a fixed number of decimal places when displaying a double value, you can supply a format string when converting it to a string. The following program demonstrates this.
using System;
using System.Linq;

namespace DoublePrecision
{
    // Compute the average of an integer array and display it with two decimal places.
    class Program
    {
        static void Main(string[] args)
        {
            int[] nums = { 1, 2, 3, 4, 5, 6, 9 };
            double avg = nums.Average();

            Console.WriteLine(avg);             // full precision
            Console.WriteLine("{0:0.00}", avg); // composite format string
            string str = avg.ToString("0.00");  // format string passed to ToString
            Console.WriteLine(str);

            Console.ReadKey();
        }
    }
}
In fact, both ToString("0.00") and the formatted output round the value during conversion to a string: when Console.WriteLine() encounters a composite format item such as {0:0.00}, it passes the text after the colon to the value's ToString method as the format string.
So Console.WriteLine("{0:0.00}", avg) is equivalent to Console.WriteLine(avg.ToString("0.00"));
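The "0.00" custom format is not the only way to get two decimal places. The sketch below (a standalone example, not from the original article; the class name FormatDemo is made up) compares a few equivalent approaches, using CultureInfo.InvariantCulture so the decimal separator is always a dot:

```csharp
using System;
using System.Globalization;

class FormatDemo
{
    static void Main()
    {
        double avg = 30.0 / 7.0; // the same average as in the article, ~4.2857...

        // Custom format string: "0.00" always shows exactly two decimal places.
        Console.WriteLine(avg.ToString("0.00", CultureInfo.InvariantCulture)); // 4.29

        // Standard fixed-point format "F2" behaves the same way here.
        Console.WriteLine(avg.ToString("F2", CultureInfo.InvariantCulture));   // 4.29

        // String interpolation accepts the same format specifiers after a colon.
        Console.WriteLine($"{avg:0.00}");

        // Math.Round changes the value itself rather than only its string form.
        double rounded = Math.Round(avg, 2);
        Console.WriteLine(rounded);
    }
}
```

Note that ToString and WriteLine without an explicit culture use the current culture, so on some systems the decimal separator will be a comma instead of a dot; passing CultureInfo.InvariantCulture makes the output predictable.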
Running result: the first line prints the full-precision average (approximately 4.285714...), and the two formatted lines both print 4.29.