At lunchtime, with nothing else to do, I wrote a small class in both Java and C# that estimates the value of PI. Both versions run the same number of iterations, and to smooth out incidental errors I deliberately chose a fairly large count: 100,000,000 (100 million) iterations.
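Both programs use the same Monte Carlo idea: points are drawn uniformly at random in the unit square, and the fraction that lands inside the quarter circle of radius 1 approximates that quarter circle's area, \(\pi/4\). The estimate used in the code below is therefore

\[ \pi \approx 4 \cdot \frac{\text{inside}}{\text{count}} \]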
The Java code is as follows:
import java.util.*;
public class CALCPI
{
    // number of random points to sample
    public static final int count = 100000000;
    public static void main(String[] args)
    {
        long start = System.currentTimeMillis();
        Random random = new Random(start);
        int inside = 0;
        for (int i = 0; i < count; i++)
        {
            double cx = random.nextDouble();
            double cy = random.nextDouble();
            double distance = Math.sqrt((cx * cx) + (cy * cy));
            if (distance < 1.0)
            {
                ++inside;
            }
        }
        double pi = 4 * (double) inside / (double) count;
        long end = System.currentTimeMillis();
        long lastTime = end - start;
        System.out.println("pi=" + pi);
        System.out.println("Time used: " + lastTime + "ms");
    }
}
The C# code is as follows:
using System;
public class CALCPI
{
    // number of random points to sample
    public const int count = 100000000;
    public static void Main(string[] args)
    {
        DateTime start = DateTime.Now;
        Random random = new Random(start.Millisecond);
        int inside = 0;
        for (int i = 0; i < count; i++)
        {
            double cx = random.NextDouble();
            double cy = random.NextDouble();
            double distance = Math.Sqrt((cx * cx) + (cy * cy));
            if (distance < 1.0)
            {
                ++inside;
            }
        }
        double pi = 4 * (double) inside / (double) count;
        DateTime end = DateTime.Now;
        TimeSpan diff = end - start;
        Console.WriteLine("Pi={0}", pi);
        Console.WriteLine("Consume time: {0}ms", diff.TotalMilliseconds);
    }
}
Screenshot of the final execution results:
From this result, it seems that Java does not perform floating-point operations as well as C#, and the final C# result also appears better than the Java one. Of course, the code would still need to be improved to make the test more scientific and fair.
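As a rough illustration of one such improvement, the Java side could run the loop once untimed so the JIT has already compiled the hot path before measurement, and use System.nanoTime() for the timing. The sketch below reuses the same Monte Carlo loop as above; the class name CALCPIBench and the warm-up count of 1,000,000 iterations are arbitrary choices for illustration, not part of the original test.

import java.util.Random;

public class CALCPIBench
{
    static final int COUNT = 100000000;

    // Same Monte Carlo loop as above, factored out so it can be run
    // once untimed (JIT warm-up) and once timed.
    static double estimatePi(long seed, int count)
    {
        Random random = new Random(seed);
        int inside = 0;
        for (int i = 0; i < count; i++)
        {
            double cx = random.nextDouble();
            double cy = random.nextDouble();
            if (Math.sqrt(cx * cx + cy * cy) < 1.0)
            {
                ++inside;
            }
        }
        return 4 * (double) inside / (double) count;
    }

    public static void main(String[] args)
    {
        estimatePi(0, 1000000);               // warm-up run, result discarded
        long start = System.nanoTime();
        double pi = estimatePi(start, COUNT); // timed run
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("pi=" + pi);
        System.out.println("Time used: " + elapsedMs + "ms");
    }
}

Timing only the second run keeps class loading and first-pass JIT compilation out of the measurement, which otherwise penalizes the Java number in a single-shot comparison like the one above.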