int variables sometimes occupy 4 bytes (in Win32) and sometimes 2 bytes (in DOS).
Is the size of the int type related to the compiler, CPU, or operating system?
What we call a 16-bit, 32-bit, or 64-bit system is determined by the CPU: by the addressing width of its machine instructions and the width of its registers.
The OS is constrained by the CPU, but a 16-bit OS can run on a 32-bit CPU (as mentioned above, plain DOS).
Many operating systems are also backward compatible, so old programs can still run: if the compiler itself was built in the 16-bit era, the OS provides a 16-bit subenvironment for it to run in.
int and void * should be the same length (in 16-bit mode, the 20-bit pointer is composed of two 16-bit values, segment and offset).
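To make the "two 16-bit parts" concrete, here is a minimal sketch (the function name and the reset-vector example are my own illustration) of how a real-mode 8086 far pointer forms a 20-bit physical address:

    #include <stdio.h>
    #include <stdint.h>

    /* A real-mode far pointer is two 16-bit values, segment:offset.
     * The CPU forms the 20-bit physical address as segment*16 + offset,
     * which is what gives the 8086 its 1 MiB (2^20) address space. */
    static uint32_t physical_address(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* 0xF000:0xFFF0 is the 8086 reset vector. */
        printf("0x%05lX\n", (unsigned long)physical_address(0xF000, 0xFFF0)); /* 0xFFFF0 */
        return 0;
    }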
int is only a keyword in the language definition, visible only to the compiler. The compiler alone decides how many bits it gets; strictly speaking, that is independent of the OS/CPU.
sizeof is always the safest way to check, but sizeof is only a compile-time constant, so it cannot by itself give you binary compatibility (porting).
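As a quick check, a minimal program that prints what sizeof reports (the numbers depend on the compiler's target: typically 4/4/4 on Win32, while a 16-bit DOS compiler may report 2 for int and, depending on the memory model, 2 or 4 for void *):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof is evaluated at compile time, so these numbers describe
         * the platform the compiler targets, not whatever machine the
         * binary is later moved to -- which is why sizeof alone cannot
         * guarantee binary compatibility. */
        printf("int    : %u bytes\n", (unsigned)sizeof(int));
        printf("long   : %u bytes\n", (unsigned)sizeof(long));
        printf("void * : %u bytes\n", (unsigned)sizeof(void *));
        return 0;
    }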
Further explanation:
int and void * are the same length because both are generally held in a single register. Actually, that is not a very accurate way to put it, but it should not be hard to understand: int is XX bits on an XX-bit CPU only because the CPU has single instructions that operate on XX bits of data (because the registers are XX bits wide; there may also be extended instructions I don't know about, but the register width is what matters), so the compiler makes int XX bits for convenience. For example, when 64-bit machines come out, the compiler may simply extend long to 64 bits while int stays at 32.
Well, that's why many programs don't use int, short, and long directly, but define int32_t, int16_t, uint32_t, ... No matter how the CPU/compiler changes in the future, only the typedefs need to change.
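A minimal sketch of that idea: on a C99 compiler the fixed-width names come straight from <stdint.h>, while pre-C99 projects kept hand-written typedefs in one project header, so only that header changes when the platform does (the #else branch below is illustrative, not universal):

    /* Hypothetical portability header. */
    #if __STDC_VERSION__ >= 199901L
    #include <stdint.h>               /* C99 already defines int16_t etc. */
    #else
    typedef signed short   int16_t;   /* 16 bits on this compiler */
    typedef signed long    int32_t;   /* 32 bits on this compiler */
    typedef unsigned long  uint32_t;
    #endif

Application code then writes int32_t everywhere and never has to care which underlying keyword the compiler maps it to.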
Yes
I don't think short / unsigned short (or WORD) must be defined as 16 bits, or that long / unsigned long (DWORD) must be defined as 32 bits, right? I'd trust WORD and DWORD, because they are typedefs in M$ VC, but the rest I wouldn't rely on across compilers.
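For what it's worth, this is roughly what the Windows headers do (paraphrased, so treat it as a sketch rather than the exact SDK text): WORD and DWORD are pinned to a width by typedef instead of being left to whatever short and long happen to mean:

    /* Paraphrase of the Windows SDK typedefs in windef.h:
     * the names encode a width, independent of the C keywords. */
    typedef unsigned short WORD;   /* 16 bits on MSVC targets */
    typedef unsigned long  DWORD;  /* 32 bits on MSVC targets */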
float exists because we need floating-point numbers; double exists because we need floating-point numbers of higher precision. int exists because we need integers; long exists because we need integers with a larger value range; short exists to save space when handling integers with smaller values.
int means that when you need a loop variable (i = 0; i < 100; i++), you don't have to agonize over whether to use long or short, or whether to add unsigned...
The reason the standard does not specify the width of int, short, and long is to leave the decision to the compiler, so that the compiler can evolve as the hardware evolves.
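What the standard does pin down is a set of minimum ranges, visible through <limits.h>; a minimal program to inspect them:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* The standard guarantees only minimums -- int must hold at
         * least +/-32767 (>= 16 bits), long at least +/-2147483647
         * (>= 32 bits) -- plus the ordering short <= int <= long.
         * The exact widths are the compiler's choice, which is what
         * lets it track the hardware. */
        printf("SHRT_MAX = %d\n",  SHRT_MAX);
        printf("INT_MAX  = %d\n",  INT_MAX);
        printf("LONG_MAX = %ld\n", LONG_MAX);
        return 0;
    }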