New features in the Java Math class--floating-point numbers

Source: Internet
Author: User
Tags: pow, ranges



Java™ 5 added 10 new methods to java.lang.Math and java.lang.StrictMath, and Java 6 adds 10 more. Part 1 of this two-part series covered the additions that are meaningful mathematically: functions that mathematicians knew long before computers existed. In this second part, I focus on the functions that are designed for manipulating floating-point numbers rather than abstract real numbers.



As I mentioned in Part 1, the difference between a real number (such as e or 0.2) and its computer representation (such as a Java double) is important. The ideal number has infinite precision, whereas the Java representation has a fixed number of bits (32 for a float, 64 for a double). The maximum value of a float is about 3.4*10^38, which is not large enough to represent some quantities, such as the number of electrons in the universe.



The maximum value of a double is about 1.8*10^308, which can represent almost any physical quantity. For calculations involving abstract mathematical quantities, however, those limits are easy to exceed. For example, a mere 171! (171 * 170 * 169 * 168 * ... * 1) overflows a double, and a float overflows at just 35!. Very small numbers (numbers close to 0) can cause trouble too, and dividing a very large number by a very small one is particularly dangerous.



To address these problems, the IEEE 754 standard for floating-point math adds the special values Inf and NaN, which represent infinity and "not a number," respectively. IEEE 754 also defines positive and negative zero (in ordinary mathematics zero is neither positive nor negative, but in computer arithmetic it can be either). These values confound traditional assumptions. For example, NaN is not equal to anything, not even to itself: when x is NaN, the test x == x is false.
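
If you have never run into these values, a small demonstration may help. This is a sketch of my own (the class name and sample values are not from the article) showing Inf, NaN, and signed zero behaving as described:

public class SpecialValues {
    public static void main(String[] args) {
        double inf = 1.0 / 0.0;   // floating-point division by zero gives Infinity, not an exception
        double nan = 0.0 / 0.0;   // 0.0/0.0 gives NaN
        System.out.println(inf);                 // Infinity
        System.out.println(nan == nan);          // false: NaN is not equal even to itself
        System.out.println(Double.isNaN(nan));   // true: the reliable way to test for NaN
        System.out.println(0.0 == -0.0);         // true, although the two have different bit patterns
    }
}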



Beyond the problem of sheer size, precision is the more practical problem. Look at this common loop, which repeatedly adds 0.1; the last value it prints is not 10.0 but 9.99999999999998:


for (double x = 0.0; x <= 10.0; x += 0.1) {
    System.err.println(x);
}


For simple applications, you can usually format the final output with java.text.DecimalFormat, rounding it to the nearest integer, and the result looks right. But for scientific and engineering applications, where you cannot assume the result should be an integer, you need to be much more careful. Be especially careful when subtracting two nearly equal large numbers to get a much smaller one, or when dividing by a very small number. These operations can magnify small errors into large ones and have a real impact on applications: a small rounding error introduced by finite-precision floating-point arithmetic can seriously distort a mathematically exact calculation.
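
The following sketch (the class name and the 1.0e16 example are mine, chosen to illustrate the point) shows both effects: formatting hides the small drift from the earlier loop, while subtracting nearly equal large numbers destroys the answer entirely:

import java.text.DecimalFormat;

public class RoundOffDemo {
    public static void main(String[] args) {
        // Accumulate 0.1 one hundred times; the sum drifts away from the exact value 10.0.
        double sum = 0.0;
        for (int i = 0; i < 100; i++) {
            sum += 0.1;
        }
        System.out.println(sum);                                  // prints roughly 9.99999999999998
        System.out.println(new DecimalFormat("0").format(sum));   // prints 10

        // Subtracting two nearly equal large numbers wipes out the significant digits;
        // the true answer below is 1.0.
        double big = 1.0e16;
        System.out.println((big + 1.0) - big);                    // prints 0.0
    }
}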


Binary representations of floating-point and double-precision numbers


The IEEE 754 floating-point numbers that Java implements have 32 bits. The first bit is the sign bit: 0 for positive, 1 for negative. The next 8 bits hold the exponent, whose value ranges from -126 to +127 for normal numbers. The final 23 bits hold the mantissa (sometimes called the significand), which, taken as an integer together with the implicit leading bit described below, ranges from 0 to 16,777,215. Putting it all together, a floating-point number is interpreted as sign * mantissa * 2^exponent.



A keen reader may have noticed that these numbers don't quite add up. To begin with, the 8 bits that hold the exponent should cover -128 to 127, just like a signed byte. Instead, the exponents are stored with a bias of 127: take the unsigned stored value (0 to 255) and subtract 127 to get the nominal exponent (now -127 to 128). But 128 and -127 are reserved as special values. When the exponent field is all 1-bits (nominally 128), the number is Inf, -Inf, or NaN; to tell which, you have to look at the mantissa. When the exponent field is all 0-bits (nominally -127), the number is denormalized (described in more detail later), but its effective exponent is -126.



The mantissa is usually interpreted as a 23-bit unsigned integer, which is simple enough. 23 bits can hold values from 0 to 2^23-1, or 8,388,607. Wait a minute: didn't I just say the mantissa ranges up to 16,777,215? That's 2^24-1. Where did the extra bit come from?



It turns out the exponent tells you what the first bit is. If the exponent field is all zeros, the leading bit is 0; otherwise it is 1. Because you always know what the leading bit is, there is no need to store it in the number, and you get an extra bit for free. Isn't that sneaky?



A floating-point number whose implicit leading mantissa bit is 1 is normalized; the value of its mantissa is always between 1 and 2. A floating-point number whose implicit leading bit is 0 is denormalized; although its exponent is fixed at -126, denormalized numbers can represent much smaller values than normalized ones.



Double-precision numbers are encoded in the same way, except that they use a 52-bit mantissa and an 11-bit exponent for greater precision and range. The exponent bias for doubles is 1023.
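
To make the encoding concrete, here is a minimal sketch of my own (the class name and the sample value -6.25f are not from the article) that pulls the three fields out of a float with Float.floatToIntBits():

public class FloatBits {
    public static void main(String[] args) {
        float x = -6.25f;
        int bits = Float.floatToIntBits(x);

        int sign     = (bits >>> 31) & 0x1;    // 1 bit
        int rawExp   = (bits >>> 23) & 0xFF;   // 8 bits, stored with a bias of 127
        int mantissa = bits & 0x7FFFFF;        // low 23 bits; the leading 1 is not stored

        System.out.println("sign = " + sign);                        // 1 (negative)
        System.out.println("exponent = " + (rawExp - 127));          // 2, since 6.25 = 1.5625 * 2^2
        System.out.println("mantissa bits = 0x" + Integer.toHexString(mantissa)); // 0x480000
        System.out.println("getExponent = " + Math.getExponent(x));  // also 2
    }
}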





Mantissa and exponent


The getExponent() methods added in Java 6 return the unbiased exponent of a float or a double. For floats the result ranges from -126 to +127, and for doubles from -1022 to +1023 (with 128 and 1024 reserved for Inf and NaN). For example, Listing 1 compares getExponent() with the more familiar approach of computing a base-2 logarithm:


Listing 1. Math.log(x)/Math.log(2) versus Math.getExponent()
 public class ExponentTest { 

    public static void main(String[] args) { 
       System.out.println("x\tlg(x)\tMath.getExponent(x)"); 
       for (int i = -255; i < 256; i++) { 
           double x = Math.pow(2, i); 
           System.out.println( 
                   x + "\t" + 
                   lg(x) + "\t" + 
                   Math.getExponent(x)); 
       } 
    } 

    public static double lg(double x) { 
        return Math.log(x)/Math.log(2); 
    } 
 }


For values where the logarithm is subject to rounding, Math.getExponent() is more accurate than the manual calculation:


x              lg(x)             Math.getExponent(x) 
 ... 
 2.68435456E8    28.0                      28 
 5.36870912E8    29.000000000000004        29 
 1.073741824E9   30.0                      30 
 2.147483648E9   31.000000000000004        31 
 4.294967296E9   32.0                      32


If you perform a large number of such calculations, Math.getExponent() is also faster. Keep in mind, however, that it is only exact for powers of 2. For example, with powers of 3 the results diverge:


x      lg(x)     Math.getExponent(x) 
 ... 
 1.0    0.0                 0 
 3.0    1.584962500721156   1 
 9.0    3.1699250014423126  3 
 27.0   4.754887502163469   4 
 81.0   6.339850002884625   6


getExponent() does not account for the mantissa, which Math.log() does. It is possible to extract the mantissa, take its logarithm, and add it to the exponent, but that takes a few extra steps. Math.getExponent() is most useful when you need a quick estimate of an order of magnitude rather than an exact value.



Unlike Math.log(), Math.getExponent() never returns NaN or Inf. If the argument is NaN or Inf, the result is 128 for a float and 1024 for a double. If the argument is 0, the result is -127 for a float and -1023 for a double. If the argument is negative, its exponent is the same as the exponent of its absolute value. For example, the exponent of -8 is 3, the same as the exponent of 8.
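
These edge cases are easy to check directly. The following sketch (class name mine) simply prints the values described above:

public class ExponentEdgeCases {
    public static void main(String[] args) {
        System.out.println(Math.getExponent(Double.NaN));               // 1024
        System.out.println(Math.getExponent(Double.POSITIVE_INFINITY)); // 1024
        System.out.println(Math.getExponent(0.0));                      // -1023
        System.out.println(Math.getExponent(Float.NaN));                // 128
        System.out.println(Math.getExponent(0.0f));                     // -127
        System.out.println(Math.getExponent(-8.0));                     // 3, same as for 8.0
    }
}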



There is no corresponding getMantissa() method, but a little algebra is enough to construct one:


public static double getMantissa(double x) {
    int exponent = Math.getExponent(x);
    return x / Math.pow(2, exponent);
}


Less obviously, you can also find the mantissa by bit masking. To extract the stored bits, compute Double.doubleToLongBits(x) & 0x000FFFFFFFFFFFFFL. You then have to account for the implicit leading 1-bit of normalized numbers and convert the resulting integer back into a floating-point number between 1 and 2.
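
Here is one way that conversion might look. This sketch is my own: it only handles normalized, finite values (it ignores zero, denormalized numbers, Inf, and NaN), and unlike the division-based version above it discards the sign:

public class MantissaByMasking {
    // Returns the mantissa of a normalized double as a value between 1 and 2,
    // by masking out the 52 stored mantissa bits and restoring the implicit leading 1.
    public static double getMantissa(double x) {
        long bits = Double.doubleToLongBits(x) & 0x000FFFFFFFFFFFFFL; // low 52 bits
        return 1.0 + bits / (double) (1L << 52);  // add the hidden 1-bit, scale into [1, 2)
    }

    public static void main(String[] args) {
        System.out.println(getMantissa(6.25));    // 1.5625
        System.out.println(getMantissa(-0.375));  // 1.5 (the sign is ignored)
    }
}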





Smallest unit of precision


Real numbers are dense: between any two distinct real numbers there is always another real number. Floating-point numbers are not like that. For every float and double there is a next float or double, and there is a smallest finite distance between consecutive values. The nextUp() method returns the nearest floating-point number greater than its argument. For example, Listing 2 prints all the floats between 1.0 and 2.0:


Listing 2. Counting the floats between 1.0 and 2.0
 public class FloatCounter { 

    public static void main(String[] args) { 
        float x = 1.0F; 
        int numFloats = 0; 
        while (x <= 2.0) { 
            numFloats++; 
            System.out.println(x); 
            x = Math.nextUp(x); 
        } 
        System.out.println(numFloats); 
    } 

 }


It turns out there are 8,388,609 floats between 1.0 and 2.0 inclusive; a lot, but hardly an infinite number. Adjacent values are about 0.0000001 apart. This distance is called an ULP, which is short for unit of least precision (or unit in the last place).



If you need to go the other way and find the nearest floating-point number that is less than a given value, use the nextAfter() method instead. Its second argument specifies whether to look for the nearest number above or below the first argument:


public static float nextAfter(float start, double direction)
public static double nextAfter(double start, double direction)


If direction is greater than start, nextAfter() returns the next number above start. If direction is less than start, it returns the next number below start. If direction equals start, it returns start itself.
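
A quick sketch (the values are my own choice) shows the behavior around 1.0f:

public class NextAfterDemo {
    public static void main(String[] args) {
        float x = 1.0f;
        System.out.println(Math.nextUp(x));          // 1.0000001
        System.out.println(Math.nextAfter(x, 2.0));  // 1.0000001 (direction is above start)
        System.out.println(Math.nextAfter(x, 0.0));  // 0.99999994 (direction is below start)
        System.out.println(Math.nextAfter(x, 1.0));  // 1.0 (direction equals start)
    }
}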



These methods are useful in some modeling and graphics applications. Numerically, you might want to sample values at 10,000 points between a and b, but if the available precision can only distinguish 1,000 separate points between a and b, then nine-tenths of that work is redundant. You could do a tenth of the work and get the same results.



Of course, if you really need the extra precision, you can choose a data type that has more of it, such as a double or a BigDecimal. For example, I have seen this happen in Mandelbrot set explorers, where you can zoom in until the entire view falls between the two nearest doubles. The Mandelbrot set is intricate and complex at every scale, but a float or a double loses the ability to distinguish adjacent points before that detail runs out.



Math.ulp() returns the distance between a number and its nearest neighbor. Listing 3 prints the ULPs of various powers of 2:


Listing 3. ULPs of float powers of 2
 public class UlpPrinter { 

    public static void main(String[] args) { 
        for (float x = 1.0f; x <= Float.MAX_VALUE; x *= 2.0f) { 
            System.out.println(Math.getExponent(x) + "\t" + x + "\t" + Math.ulp(x)); 
        } 
    } 

 }


Some of the output is given below:


0   1.0   1.1920929E-7 
 1   2.0   2.3841858E-7 
 2   4.0   4.7683716E-7 
 3   8.0   9.536743E-7 
 4   16.0  1.9073486E-6 
 ... 
 20  1048576.0   0.125 
 21  2097152.0   0.25 
 22  4194304.0   0.5 
 23  8388608.0   1.0 
 24  1.6777216E7 2.0 
 25  3.3554432E7 4.0 
 ... 
 125 4.2535296E37    5.0706024E30 
 126 8.507059E37     1.0141205E31 
 127 1.7014118E38    2.028241E31


As you can see, floats are very accurate for small powers of 2. However, for many applications this level of precision becomes a problem around 2^20. Near the upper limit of the float range, adjacent values are separated by sextillions (actually far more than that, but I can't find a word for numbers so large).



As Listing 3 shows, the size of an ULP is not fixed. As numbers get larger, the floating-point numbers between them get sparser. For example, there are only 1,025 floats between 10,000 and 10,001, and they are about 0.001 apart. Between 1,000,000 and 1,000,001 there are only 17 floats, spaced 0.0625 apart. Precision is inversely proportional to magnitude. By the time you reach the float 10,000,000, the ULP has grown to 1.0, and beyond that point multiple integer values map onto the same float. For doubles this does not happen until about 4.5E15, but it is still a concern.
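
You can confirm these spacings directly with Math.ulp(); the following values are ones I checked myself, not figures from the article:

public class UlpSizes {
    public static void main(String[] args) {
        System.out.println(Math.ulp(10000f));    // 9.765625E-4, roughly 0.001
        System.out.println(Math.ulp(1000000f));  // 0.0625
        System.out.println(Math.ulp(1.0e7f));    // 1.0
        System.out.println(Math.ulp(4.5e15));    // 0.5 for a double; it reaches 1.0 just past 2^52
    }
}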



The finite precision of floating-point numbers leads to another surprising result: beyond a certain point, x + 1 == x is true. For example, this seemingly simple loop is actually infinite:



for (float x = 16777213f; x < 16777218f; x += 1.0f) {
    System.out.println(x);
}



In fact, the loop gets stuck at a fixed value: x stops increasing at exactly 16,777,216. That number is 2^24, the point at which the ULP becomes larger than the increment.
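
A two-line check (mine, not the article's) makes the effect explicit:

public class StuckCounter {
    public static void main(String[] args) {
        float x = 16777216f;                // 2^24: the ULP here is 2.0
        System.out.println(x + 1.0f == x);  // true: adding 1 rounds back to x
        System.out.println(Math.ulp(x));    // 2.0
    }
}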



Math.ulp() has a practical use in testing. Obviously, you don't normally compare two floating-point numbers for exact equality. Instead, you check that they are equal within a certain tolerance. For example, in JUnit you compare an actual floating-point value against an expected one like this:


assertEquals(expectedValue, actualValue, 0.02);


This asserts that the actual value is within 0.02 of the expected value. But is 0.02 a reasonable tolerance? If the expected value is 10.5 or -107.82, then 0.02 is probably fine. But if the expected value is in the billions, 0.02 is indistinguishable from zero. What usually matters in testing is the relative error, measured in ULPs. A tolerance of one to a few ULPs is a typical choice, depending on how much accuracy the calculation requires. For example, the following asserts that the actual result is within 5 ULPs of the true value:


assertEquals(expectedValue, actualValue, 5 * Math.ulp(expectedValue));


Depending on the size of the expected value, that tolerance could be a tiny fraction or it could be in the millions.
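
Outside of JUnit you can express the same idea as a small helper. The following sketch (the class and method names are mine, and the magnitudes are purely illustrative) contrasts an ULP-based check with a fixed absolute tolerance:

public class UlpTolerance {
    // A sketch of a relative, ULP-based comparison; this is not a JUnit API.
    static boolean within(double expected, double actual, int ulps) {
        return Math.abs(expected - actual) <= ulps * Math.ulp(expected);
    }

    public static void main(String[] args) {
        double expected = 1.0e15;
        double actual   = expected + 3 * Math.ulp(expected);  // 3 ULPs off: a plausible round-off error

        System.out.println(Math.ulp(expected));                   // 0.125 at this magnitude
        System.out.println(within(expected, actual, 5));          // true: within 5 ULPs
        System.out.println(Math.abs(expected - actual) <= 0.02);  // false: a fixed 0.02 is too strict here
    }
}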





scalb


Math.scalb(x, y) multiplies x by 2^y; scalb is short for "scale binary."


public static float scalb(float f, int scaleFactor)
public static double scalb(double d, int scaleFactor)


For example, Math.scalb(3, 4) returns 3 * 2^4, which is 3 * 16, or 48.0. You can also use Math.scalb() to implement getMantissa():


public static double getMantissa(double x) { 
    int exponent = Math.getExponent(x); 
    return x / Math.scalb(1.0, exponent); 
 }


What is the difference between Math.scalb() and x*Math.pow(2, scaleFactor)? In terms of results, nothing: they return exactly the same value for every input. The difference is performance. Math.pow() is notoriously slow. It has to be able to handle genuinely unusual cases, such as raising 3.14 to the power -0.078, and for small integer powers such as 2 and 3 (and for the special case of 2 as a base) it typically ends up using an algorithm that is far more general, and far more expensive, than the job requires.



I worry that this hurts overall performance. Some compilers and VMs are smart about it: some optimizers recognize x*Math.pow(2, y) as a special case and convert it into Math.scalb(x, y) or something equivalent, so there is no performance impact at all. But I am sure some VMs are not so clever. For example, in my testing on Apple's Java 6 VM, Math.scalb() was almost two orders of magnitude faster than x*Math.pow(2, y). Of course, this usually doesn't matter. But in special cases, such as performing millions of exponentiations, it is worth considering whether you can convert them to use Math.scalb().
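
If you want to check this on your own VM, a rough comparison along the following lines will do. The class name and loop counts are my own, and any serious measurement should use a proper micro-benchmark harness with warm-up and repetition:

public class ScalbVersusPow {
    public static void main(String[] args) {
        // Same mathematical result either way.
        System.out.println(Math.scalb(3.0, 4));    // 48.0
        System.out.println(3.0 * Math.pow(2, 4));  // 48.0

        // A very rough timing comparison.
        double sink = 0.0;
        long start = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            sink += Math.scalb(1.5, i & 31);
        }
        long scalbNanos = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            sink += 1.5 * Math.pow(2, i & 31);
        }
        long powNanos = System.nanoTime() - start;

        System.out.println("scalb: " + scalbNanos + " ns, pow: " + powNanos + " ns (" + sink + ")");
    }
}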





copySign


The Math.copySign() method sets the sign of its first argument to the sign of its second argument. The simplest implementation might look like Listing 4:


Listing 4. A possible copySign algorithm
public static double copySign(double magnitude, double sign) { 
    if (magnitude == 0.0) return 0.0; 
    else if (sign < 0) { 
      if (magnitude < 0) return magnitude; 
      else return -magnitude; 
    } 
    else if (sign > 0) { 
      if (magnitude < 0) return -magnitude; 
      else return magnitude; 
    } 
    return magnitude; 
 }


However, the real implementation is shown in Listing 5:


Listing 5. The real algorithm, from sun.misc.FpUtils
 public static double rawCopySign(double magnitude, double sign) { 
    return Double.longBitsToDouble((Double.doubleToRawLongBits(sign) & 
                                   (DoubleConsts.SIGN_BIT_MASK)) | 
                                   (Double.doubleToRawLongBits(magnitude) & 
                                   (DoubleConsts.EXP_BIT_MASK | 
                                   DoubleConsts.SIGNIF_BIT_MASK))); 
 }


Looking closely at those bits, you can see that the sign of NaN is treated as positive. Strictly speaking, Math.copySign() does not guarantee this; only StrictMath.copySign() does. In practice, though, they both call the same bit-manipulation code.



Listing 5 may be faster than Listing 4, but its main purpose is to handle negative zero correctly. Math.copySign(10, -0.0) returns -10.0, whereas Math.copySign(10, 0.0) returns 10.0. The naive algorithm in Listing 4 returns 10.0 in both cases. Negative zero can arise from delicate operations, such as dividing a very small negative double by a very large positive double. For example, -1.0E-147/2.1E189 returns negative zero, while 1.0E-147/2.1E189 returns positive zero. The two values compare as equal with ==, however, so if you need to distinguish them you have to inspect the sign explicitly, for example with Math.copySign(1.0, x), which returns -1.0 for negative zero and 1.0 for positive zero.
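
A small demonstration (my own values, not the article's) shows the difference:

public class NegativeZero {
    public static void main(String[] args) {
        double negZero = -1.0e-300 / 1.0e300;  // underflows to -0.0
        double posZero =  1.0e-300 / 1.0e300;  // underflows to 0.0

        System.out.println(negZero == posZero);           // true: == cannot tell them apart
        System.out.println(Math.copySign(1.0, negZero));  // -1.0
        System.out.println(Math.copySign(1.0, posZero));  // 1.0
        System.out.println(Double.doubleToLongBits(negZero) ==
                           Double.doubleToLongBits(posZero));  // false: the bit patterns differ
    }
}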





Logarithms and exponentials


The exponential function is a good example of how much more care a finite-precision floating-point number needs than an abstract, infinitely precise real number. The function e^x, computed by Math.exp(), appears in a great many equations. For example, it can be used to define the cosh function already discussed in Part 1:



cosh(x) = (e^x + e^-x) / 2



However, for values of x close to zero, the algorithm used to compute Math.exp() behaves poorly and is prone to round-off error. It is more accurate to use a different algorithm that computes e^x - 1, and then add 1 to the final result. Math.expm1() implements that alternate algorithm (the m1 means "minus 1"). For example, the cosh function in Listing 6 computes the e^x term with Math.exp() and the e^-x term with Math.expm1():


Listing 6. cosh function
 public static double cosh(double x) { 
    if (x < 0) x = -x; 
    double term1 = Math.exp(x); 
    double term2 = Math.expm1(-x) + 1; 
    return (term1 + term2)/2; 
 }


This particular example is a little contrived, because for most arguments the e^x term dominates and the difference between Math.exp() and Math.expm1() + 1 barely matters. Math.expm1() really shines in financial calculations that involve small rates, such as the daily interest rate on a Treasury bill.



Math.log1p() is to Math.expm1() what Math.log() is to Math.exp(): its inverse. It computes the logarithm of 1 plus its argument (the 1p means "plus 1"). Use it for values close to 1; for example, compute Math.log1p(0.0002) rather than Math.log(1.0002).
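
To see why this matters, compare the two for a very small argument. The value 1.0e-15 below is my own illustration:

public class Log1pDemo {
    public static void main(String[] args) {
        double x = 1.0e-15;
        // The mathematically correct value of ln(1 + x) here is approximately 1.0E-15.
        System.out.println(Math.log(1.0 + x));  // about 1.11E-15: the sum 1.0 + x already lost digits
        System.out.println(Math.log1p(x));      // very close to 1.0E-15
    }
}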



Now suppose, for example, that you need to know how many days it takes for $1,000 to grow to $1,100 at a daily interest rate of 0.03. Listing 7 performs this calculation:


Listing 7. Calculating the time needed for a present value to grow to a given future value
public static double calculateNumberOfPeriods( 
        double presentValue, double futureValue, double rate) { 
    return (Math.log(futureValue) - Math.log(presentValue))/Math.log1p(rate); 
 }


In this case the meaning of 1p is easy to see, because the factor 1 + r appears throughout formulas of this kind. In other words, although the investor cares about multiplying the initial investment by (1 + r)^n, the lender quotes the rate as the added percentage, the r part. After all, an investor earning 3% interest would be rather unhappy to get back only 3% of the invested amount.
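
Here is how calling the method from Listing 7 might look; the wrapper class and the final sanity check are mine:

public class InterestPeriods {
    public static double calculateNumberOfPeriods(
            double presentValue, double futureValue, double rate) {
        return (Math.log(futureValue) - Math.log(presentValue)) / Math.log1p(rate);
    }

    public static void main(String[] args) {
        // Growing $1,000 to $1,100 at 3% per period takes roughly 3.2 periods.
        double n = calculateNumberOfPeriods(1000.0, 1100.0, 0.03);
        System.out.println(n);
        // Sanity check: 1000 * (1 + 0.03)^n should be approximately 1100.
        System.out.println(1000.0 * Math.pow(1.03, n));
    }
}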





Conclusion


Floating-point numbers are not real numbers. There are only finitely many of them, and there are largest and smallest values they can represent. Most importantly, although their precision is high, it is limited, and they are prone to round-off error. On the other hand, floats and doubles can cover a far wider range of magnitudes than ints and longs. You have to keep these limitations firmly in mind, especially in scientific and engineering applications, to produce robust and reliable code. Financial applications (especially accounting applications that must be exact to the last cent) also demand extra care when working with floats and doubles.



The java.lang.Math and java.lang.StrictMath classes were carefully designed with these problems in mind. Using these classes and their methods properly will improve your programs. If nothing else, this article should show how subtle good floating-point algorithms can be: it is better to use the algorithms provided by experts than to roll your own. If java.lang.Math or java.lang.StrictMath has a method that does what you need, use it. It is usually the best choice.



Original: http://www.ibm.com/developerworks/cn/java/j-math2.html


