Learning the IEEE Binary Floating-Point Arithmetic Standard


A project I came across online requires working with the binary representation of floating-point numbers, which means using the IEEE 754 standard. So I checked Wikipedia and the book Computer Systems: A Programmer's Perspective, and re-learned how floating-point numbers are represented and stored in memory.

These are quick notes for now; the details deserve a deeper look later.

IEEE 754 defines four floating-point formats: single precision (32 bits), double precision (64 bits), single-extended precision (at least 43 bits), and double-extended precision (at least 79 bits). The extended formats are rarely used; these notes cover the first two.

A binary floating-point number is made up of three parts. The following uses the 32-bit single-precision format as an example; the double-precision format works the same way.

The three parts are the sign bit, the exponent, and the mantissa (significand, which stores the binary fraction); Wikipedia has a diagram of the field layout.

In a single-precision floating-point number, the sign takes 1 bit, the exponent 8 bits, and the significand 23 bits, for 32 bits in total.

Another important concept is the exponent bias: in IEEE 754, the exponent field stores the actual exponent plus a fixed bias, and that bias is 2^(e-1) - 1, where e is the number of exponent bits.

For single precision the bias is 2^7 - 1 = 128 - 1 = 127.

The fraction field is defined as f = 0.f_{n-1} f_{n-2} ... f_0, with the binary point to the left of the most significant bit. The significand is defined as M = 1 + f, i.e. 1.f_{n-1} ... f_0, and the exponent is chosen so that M lies in the range [1, 2).
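
To make the layout concrete, here is a minimal sketch (Python, using only the standard struct module; the helper name decompose_float is mine, not part of the standard) that splits a 32-bit float into the three fields described above:

    import struct

    def decompose_float(x):
        # Reinterpret the 32-bit float as an unsigned integer (big-endian packing).
        bits = struct.unpack('>I', struct.pack('>f', x))[0]
        sign     = bits >> 31             # 1 bit
        exponent = (bits >> 23) & 0xFF    # 8 bits, stored with a bias of 127
        fraction = bits & 0x7FFFFF        # 23 bits, the f in M = 1 + f
        return sign, exponent, fraction

For example, decompose_float(1.5) returns (0, 127, 0x400000): the actual exponent is 127 - 127 = 0 and the significand is 1 + 0.5 = 1.5.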

Consider the example 8.25.

In binary this is 1000.01, which can be written as 1.00001 × 2^3.

So the stored exponent is e = 3 + 127 = 130, i.e. 10000010 in binary.

So 8.25 in memory is represented as:

0 10000010 00001000000000000000000
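
To double-check this pattern, a quick sketch that plugs the three fields back into the value formula (-1)^sign × (1 + fraction/2^23) × 2^(exponent - 127):

    sign, exponent, fraction = 0, 0b10000010, 0b00001000000000000000000
    value = (-1) ** sign * (1 + fraction / 2 ** 23) * 2 ** (exponent - 127)
    print(value)   # prints 8.25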

To convert the fractional part to binary, repeatedly multiply the fraction by 2 and take the integer part (0 or 1) of the result as the next bit, then continue with the remaining fractional part, looping until the desired number of bits is obtained.

For example, 0.25 × 2 = 0.5 gives integer part 0, then 0.5 × 2 = 1.0 gives integer part 1, so 0.25 in binary is 0.01.
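
Here is a sketch of this multiply-by-2 loop (plain Python; frac_to_binary is an illustrative name, not from the original note):

    def frac_to_binary(frac, max_bits=23):
        # Repeatedly multiply by 2 and peel off the integer part as the next bit.
        bits = []
        while frac and len(bits) < max_bits:
            frac *= 2
            bits.append('1' if frac >= 1 else '0')
            frac -= int(frac)
        return '0.' + ''.join(bits)

    print(frac_to_binary(0.25))   # prints 0.01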

For normalized single-precision values the actual exponent ranges from -126 to 127. A floating-point number such as 0.25 can be expressed as 1.0 × 2^-2.

So the stored exponent is -2 + 127 = 125, i.e. 01111101 in binary.

In memory, 0.25 is therefore represented as:

0 01111101 00000000000000000000000
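
The decompose_float sketch from earlier reproduces this pattern:

    print(decompose_float(0.25))   # (0, 125, 0)  ->  0 01111101 00000000000000000000000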

The above covers normalized values in floating-point representation; the other two categories, denormalized values and special values, will be covered in a later note.
