This interview question comes from a well-known Chinese telecommunications company. :)
#include <iostream>
using namespace std;

int main()
{
    float a = 1.0f;
    cout << sizeof(int)   << endl;  // 4
    cout << sizeof(float) << endl;  // 4
    cout << (int)a  << endl;  // 1 (value conversion)
    cout << &a      << endl;  // a's address in hexadecimal, e.g. 0012FF7C
    cout << (int)&a << endl;  // a's address cast to a decimal integer, e.g. 1245052
    cout << (int&)a << endl;  // 1065353216 (explained below)
}
(int&)a casts a reference to a into an int reference. The memory where a lives was defined as a float and initialized to 1.0f, but this cast tells the compiler to interpret that same memory as an int: the bit pattern stored at a's address, which encodes a float value, is re-read as an integer.
1.0f is stored in memory as

0 01111111 00000000000000000000000   (sign | exponent | mantissa)

Interpreting this bit pattern as an int gives 2^29 + 2^28 + 2^27 + 2^26 + 2^25 + 2^24 + 2^23 = 1065353216.
Understanding this conversion requires the following background.
1. Converting decimal fractions to binary

A fraction less than 1 (such as 0.5) is converted by repeatedly multiplying by 2 and taking the integer part at each step. For example, 0.5 × 2 = 1.0, so the first fractional bit is 1 and the binary form of 0.5 is 0.1. Likewise, 12.5 = 12 + 0.5 = 1100.1 in binary.
2. How floating-point numbers are stored in memory

float, double, and integer types are stored differently. An integer is stored as its plain binary value (byte order aside, since there is a little-endian/big-endian distinction, the bits map directly to the value). Single- and double-precision floats, however, are not stored as a plain binary value; they are encoded according to the IEEE-754 standard, with dedicated bit fields.
A float variable occupies 4 bytes (32 bits) of memory and follows the IEEE-754 format standard. A floating-point number is described by a sign, a mantissa m, and an exponent e:

±mantissa × 2^exponent

(note that both the mantissa and the exponent here are in binary notation).
For example, the single-precision number 12.5 can be expressed as 12.5 = 1100.1 in binary = 1.1001 × 2^3.
The mantissa part is a binary number carrying the significant digits of the floating-point value.
The exponent part occupies 8 bits, which can represent values from 0 to 255. Since the stored field must be non-negative but real exponents can be negative, IEEE-754 stores the exponent with a bias: the value read from memory minus 127 is the true exponent. For example, 12.5 = 1.1001 × 2^3 in binary, so 3 is the actual exponent and the stored field is 3 + 127 = 130. The true exponent of a normal float therefore ranges from -126 to 127 (stored values 0 and 255 are reserved for special cases).
The mantissa is effectively a 24-bit value, but because its highest bit is always 1 (for normal numbers), that bit is omitted and only 23 bits are stored. The 23 mantissa bits plus the 8 exponent bits account for 31 of the 32 bits. What is the remaining bit for? It is the highest bit of the 4 bytes, and it records the sign of the floating-point number: when it is 1 the number is negative, and when it is 0 the number is positive.
In other words, on a little-endian CPU the encoding of a float looks like this:
31 <------------------------------------------------ 0
 S (1 bit) | E (8 bits) | M (23 bits)

That is, by byte address:

-----------------------------------------------------
 addr0+3   addr0+2   addr0+1   addr0
 SEEEEEEE  EMMMMMMM  MMMMMMMM  MMMMMMMM
-----------------------------------------------------
S: the sign of the floating-point number; 1 means negative, 0 means positive.
E: the exponent plus 127, stored as a binary number.
M: the 24-bit mantissa (only 23 bits are stored).
One caveat: when the floating-point number is 0, the exponent and mantissa fields are both 0, and the formula above does not hold (since 2^0 = 1, zero has to be encoded as a special case). This exception need not concern us here; it is handled automatically.
So the in-memory binary representation of 1.0f is: 1.0 in binary is 1.0, stored as

0 01111111 00000000000000000000000 = 0x3F800000

Interpreting this bit pattern as an integer gives 2^29 + 2^28 + 2^27 + 2^26 + 2^25 + 2^24 + 2^23 = 1065353216.
// (int&)a is equivalent to *(int*)&a, *(int*)(&a), and *((int*)&a)

float b = 0.0f;
cout << (int)b  << endl;  // 0
cout << &b      << endl;  // b's address
cout << (int&)b << endl;  // 0 (0.0f happens to have the all-zero bit pattern)
cout << boolalpha << ((int)b == (int&)b) << endl;  // true, because 0 == 0
This covers how single-precision floats are laid out in memory, at least on little-endian x86; on a big-endian machine the byte order differs, but the encoding still follows the IEEE standard.
How float data is stored in memory is one of the trickier interview topics. :)
So: what is the difference between (int)a, (int&)a, &a, and (int)&a?