1. Implicit conversion
C performs implicit conversion in the following four scenarios (a short sketch follows the list):
1. In arithmetic expressions, the lower type is converted to the higher type.
2. In an assignment expression, the value of the right-hand expression is automatically converted to the type of the variable on the left and then assigned to it.
3. When arguments are passed in a function call, the system implicitly converts each argument to the type of the corresponding formal parameter and assigns it to that parameter.
4. When a function returns a value, the system implicitly converts the type of the return expression to the function's return type and passes the value back to the caller.
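A minimal sketch illustrating all four scenarios (the function names twice and half are just for illustration):

#include <iostream>

// Scenario 4: the int expression x * 2 is implicitly converted to double on return.
double twice(int x) {
    return x * 2;
}

// Scenario 3: passing an int argument to a double parameter converts it implicitly.
double half(double d) {
    return d / 2.0;
}

int main() {
    int i = 3;
    double d = 1.5;
    double sum = i + d;    // Scenario 1: i is converted to double before the addition.
    int truncated = d;     // Scenario 2: the double 1.5 is converted to int (value 1) on assignment.
    std::cout << sum << " " << truncated << " "
              << half(i) << " " << twice(i) << std::endl;   // prints 4.5 1 1.5 6
    return 0;
}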
2. Implicit conversion in arithmetic operations
In arithmetic operations, the following conversion rules are applied first (see the sketch after the list):
1. char values are first converted to int (the C language specifies that character data and integer data are interchangeable).
2. short values are converted to int (both are integer types).
3. float values are converted to double-precision (double) during the operation in order to improve precision (both are floating-point types).
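A small sketch that makes the promotions visible with sizeof. One caveat: on modern compilers, float-to-double promotion is guaranteed only in contexts such as passing a float to a variadic function like printf; plain float arithmetic may stay in float.

#include <cstdio>

int main() {
    char ch = 'A';
    short s = 1;
    float f = 1.5f;

    // char and short operands are promoted to int before the arithmetic.
    printf("sizeof(ch + ch) = %zu, sizeof(int) = %zu\n", sizeof(ch + ch), sizeof(int));
    printf("sizeof(s + s)   = %zu, sizeof(int) = %zu\n", sizeof(s + s), sizeof(int));

    // A float passed through "..." undergoes the default promotion to double,
    // which is why %f reads a double here; mixing float with double also yields double.
    printf("f = %f, sizeof(f + 0.0) = %zu, sizeof(double) = %zu\n",
           f, sizeof(f + 0.0), sizeof(double));
    return 0;
}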
Second, the following rules apply. When operands of different data types take part in an operation, they are first converted to a common type and then operated on; the conversion always goes from the lower type to the higher type (char and short are promoted to int, float to double, and the hierarchy then runs int → unsigned → long → double), as sketched below.
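For instance, a sketch of mixed-type arithmetic following the low-to-high rule (assuming a C++11 compiler for decltype and <type_traits>):

#include <iostream>
#include <type_traits>

int main() {
    int i = 7;
    long l = 2;
    double d = 0.5;

    // int + long: the int operand is converted to long before the addition.
    std::cout << std::boolalpha
              << std::is_same<decltype(i + l), long>::value << std::endl;    // true
    // long + double: the long operand is converted to double.
    std::cout << std::is_same<decltype(l + d), double>::value << std::endl;  // true
    return 0;
}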
Arithmetic between signed and unsigned numbers
The following experiments were run with Visual C++ 6.
This question tests whether you understand C's rules for automatic integer conversion; some developers know very little about them. When an expression contains both signed and unsigned types, all operands are automatically converted to unsigned. In this sense the unsigned type takes priority over the signed type in an operation, which is important for embedded systems, where unsigned data types are used frequently.
Begin with an experiment: define a signed int and an unsigned int, then compare them:
unsigned int a=20;
signed int b=-130;
Is a>b or b>a? The experiment shows that b>a, that is, -130>20. Why does this happen?
This is because in C, when an operation mixes an unsigned number and a signed number, the compiler automatically converts the signed operand to unsigned before processing. So a=20 and b becomes 4294967166, and the comparison naturally yields b>a.
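A complete, runnable sketch of this comparison (the printed value assumes a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int a = 20;
    signed int b = -130;

    // b is converted to unsigned int (4294967166) before the comparison.
    if (b > a)
        std::cout << "b > a" << std::endl;
    else
        std::cout << "a >= b" << std::endl;

    std::cout << "b as unsigned: " << (unsigned int)b << std::endl;   // 4294967166
    return 0;
}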
Another example:
unsigned int a=20;
signed int b=-130;
std::cout<<a+b<<std::endl;
The output is 4294967186. For the same reason, before the operation a=20 and b is converted to 4294967166, so a+b=4294967186.
Subtraction and multiplication behave in a similar way; a sketch follows.
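For example, a sketch showing that subtraction and multiplication follow the same rule (values assume a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int a = 20;
    signed int b = -130;

    // b is converted to 4294967166 before each operation, and the results are unsigned.
    std::cout << a - b << std::endl;   // 150: 20 - 4294967166 wraps around modulo 2^32
    std::cout << a * b << std::endl;   // 4294964696: 20 * 4294967166 modulo 2^32
    return 0;
}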
If b is defined as signed int with b=-130, its type is not affected by operations between b and an immediate value; the result is still a signed int:
signed int b=-130;
std::cout<<b+30<<std::endl;
The output is -100.
As for floating-point numbers: float and double are in fact always signed. The unsigned and signed keywords cannot be applied to float or double, so the question of converting between signed and unsigned numbers does not arise for them.
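A quick sketch confirming this: float and double report themselves as signed, and unsigned float simply does not compile:

#include <iostream>
#include <limits>

int main() {
    // unsigned float f = 1.0f;   // error: 'unsigned' cannot be applied to float
    std::cout << std::boolalpha
              << std::numeric_limits<float>::is_signed << " "
              << std::numeric_limits<double>::is_signed << std::endl;   // true true
    return 0;
}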
#include <iostream>
#include <cstdio>
#include <cstdlib>
/*
When signed and unsigned types are both present in an expression,
all operands are automatically converted to the unsigned type.
*/
using namespace std;
char getChar(int x, int y) {
    char c;
    unsigned int a = x;        // a is unsigned, so a + y is evaluated as unsigned
    unsigned int b = a + y;    // y is converted to unsigned before the addition
    (a + y > 10) ? (c = 1) : (c = 2);
    return c;
}
int main() {
    char c1 = getChar(7, 4);
    char c2 = getChar(7, 3);
    char c3 = getChar(7, -7);
    char c4 = getChar(7, -8);
    printf("c1=%d\n", c1);
    printf("c2=%d\n", c2);
    printf("c3=%d\n", c3);
    printf("c4=%d\n", c4);
    system("pause");
    return 0;
}
Answer: c1=1, c2=2, c3=2, c4=1. Inside getChar, a is unsigned, so y is converted to unsigned before a+y is evaluated: 7+4=11>10 and 7+3=10 is not >10, while 7+(-7) wraps around to 0 (not >10) and 7+(-8) wraps around to 4294967295 (>10).
Here is a question that is said to be a Microsoft interview question:
unsigned int i=3;
cout<<i*-1; What is the result?
First reaction: -3. But that turns out not to be the case. I wrote a program, ran it, and got 4294967293, a very strange figure; I could not understand why it came out as such an odd number. But when I found that its hexadecimal value is 0xfffffffd, I felt I was close to the answer. This involves the conversion of data types in expressions that mix different data types. Before summarizing the conversion rules, let us review the various data types (only the numeric ones); the following table is from MSDN:
A few additions to the table above:
1) On a 32-bit machine, both int and unsigned int are 32 bits (4 bytes).
2) An enum's underlying type is determined by its largest value; it is generally int, and if a value exceeds the range of int, the smallest type larger than int that can hold it is used (unsigned int, long or unsigned long).
3) Regarding the "size" of a type: sizes are compared by the range of values a type can represent, e.g. char < unsigned char < short ... In expressions, conversions generally go from a smaller type to a larger type (except for explicit casts).
Combining various references with my own experiments, here is a summary of type conversion (covering only the integer conversions that occur in arithmetic expressions); additions and corrections are welcome:
1. All data types smaller than int (char, signed char, unsigned char, short, signed short, unsigned short) are converted to int. If the value cannot be represented by int, it is converted to unsigned int.
2. bool converts to int with false becoming 0 and true becoming 1; when an integer type converts to bool, 0 becomes false and any non-zero value becomes true.
3. If an expression mixes unsigned short and int: when int can represent every unsigned short value, the unsigned short data are converted to int; otherwise both are converted to unsigned int. For example, on a 32-bit machine int is 32 bits (range -2,147,483,648 to 2,147,483,647) and unsigned short is 16 bits (range 0 to 65,535), so int is sufficient and the unsigned short data are converted to int.
4. unsigned int and long follow the same rule as 3. On a 32-bit machine unsigned int is 32 bits (range 0 to 4,294,967,295) and long is also 32 bits (range -2,147,483,648 to 2,147,483,647), so long cannot represent every unsigned int value, and in an expression mixing unsigned int and long both are converted to unsigned long.
5. If an expression contains both int and unsigned int, all int data are converted to unsigned int.
After this summary, the answer to the question raised above should be obvious. In the expression i*-1, i is unsigned int and -1 is int (the type of an integer constant is determined in the same way as an enum's). By rule 5, -1 must be converted to unsigned int, i.e. 0xffffffff, decimal 4294967295, which is then multiplied by i: 4294967295*3. Ignoring overflow the product is 12884901885, hexadecimal 0x2fffffffd; since unsigned int holds only 32 bits, the result is 0xfffffffd, that is, 4294967293.
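A sketch that walks through the i*-1 computation step by step (the printed values assume a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int i = 3;

    // Rule 5: -1 is converted to unsigned int, i.e. 0xffffffff = 4294967295.
    unsigned int minus_one = -1;
    unsigned int result = i * -1;   // 3 * 4294967295, kept modulo 2^32

    std::cout << "-1 as unsigned: " << minus_one
              << " (0x" << std::hex << minus_one << std::dec << ")" << std::endl;
    std::cout << "i * -1 = " << result
              << " (0x" << std::hex << result << std::dec << ")" << std::endl;   // 4294967293 (0xfffffffd)
    return 0;
}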