If you use C, the statement you reach for most often is probably printf. It is a strange function: you can pass it just a format string (a char *), or append any number of extra arguments of different types, and the constraints are remarkably loose. A prototype written the usual way, say int printf(char *, int), clearly cannot describe all of these call patterns, so C provides the ellipsis for exactly this situation. If you cannot know the usage scenarios when writing a function, you simply put an ellipsis in the parameter list, and each caller can then decide the number and types of the extra arguments for himself.
int printf(const char *, ...);
A call may supply only the char * argument or several more after it, but one thing is fixed: at least the char * argument must be present, because it is the only parameter type spelled out in the prototype.
Printf ("Hello, % s \ n", Username); printf ("Hello world \ n");
Of course, if one day you cannot even be bothered to write the char * parameter, you can still declare the function:
int printf(...);
Declared this way, the function accepts any arguments: any number of them, of any types. (Strictly speaking, standard C before C23 rejects a parameter list that is only an ellipsis; at least one named parameter is required, not least because va_start needs it as an anchor. C++ and C23 do accept this form.) In practice it is the most troublesome form for the function writer, who has to be prepared to decode values of unknown type, at a minimum handling all of the built-in types.
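To make this concrete, here is a minimal sketch of how a variadic function is written with the standard <stdarg.h> macros. The function name sum_ints and the convention of passing the count as the first argument are my own choices for illustration:

#include <stdarg.h>
#include <stdio.h>

/* Sums `count` int arguments. The named parameter `count` is the
   anchor that va_start uses to locate the unnamed arguments. */
int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);            /* point ap just past `count` */
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);   /* fetch the next int and advance */
    va_end(ap);

    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(3, 10, 20, 30));   /* prints 60 */
    return 0;
}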
va_start

#define va_start(ap, v) (ap = (va_list)&v + _INTSIZEOF(v))

ap
Object of type va_list that will hold the information needed to retrieve the additional arguments with va_arg.

v (the documentation calls it paramN)
Name of the last named parameter in the function definition.
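To see what the macro actually does, here is a hedged sketch of its expansion for a concrete function. It assumes the classic 32-bit x86 cdecl convention, where va_list is just a char * and the arguments sit contiguously on the stack; on x86-64 and other ABIs that pass arguments in registers this manual walk does not work, and the real va_arg must be used. MY_INTSIZEOF is a copy of the _INTSIZEOF macro discussed below, renamed to avoid the reserved identifier:

#define MY_INTSIZEOF(n) ((sizeof(n) + sizeof(int) - 1) & ~(sizeof(int) - 1))

/* Assumes a stack-based calling convention: arguments lie one
   after another in memory, each slot rounded up to int size. */
int first_extra_int(const char *fmt, ...)
{
    /* This is what va_start(ap, fmt) expands to: take the address of
       the last named parameter and step past it, rounding its size
       up to a multiple of sizeof(int). */
    char *ap = (char *)&fmt + MY_INTSIZEOF(fmt);

    return *(int *)ap;   /* what va_arg(ap, int) would fetch */
}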
#define _INTSIZEOF(n) ((sizeof(n) + sizeof(int) - 1) & ~(sizeof(int) - 1))
The purpose of this macro is to ensure that the value of _INTSIZEOF(n) is always an integer multiple of the size of an int. Since sizeof(int) is four bytes here, the returned value can only be 4N (N a positive integer), with a minimum of 4.
Here I can only explain the macro as written; I still don't know how one would deduce it from scratch. First, sizeof(int) - 1 = 3. Because an int is four bytes on Windows, i.e. 32 bits (the range is -2,147,483,648 ~ 2,147,483,647, with the highest bit as the sign bit),
the 32-bit binary value of 3 is 00000000 00000000 00000000 00000011b,
so its bitwise complement is 11111111 11111111 11111111 11111100b, and that is the value of ~(sizeof(int) - 1).
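A quick sketch to print that mask and confirm it (the %zx conversion assumes a C99 compiler; the width of the printed value follows the platform's size_t):

#include <stdio.h>

int main(void)
{
    size_t mask = ~(sizeof(int) - 1);   /* ~3 when int is 4 bytes */
    printf("%zx\n", mask);              /* fffffffc with a 32-bit size_t,
                                           fffffffffffffffc with 64-bit */
    return 0;
}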
Then analyze (sizeof(n) + sizeof(int) - 1). Because sizeof(n) >= 1, we have (sizeof(n) + sizeof(int) - 1) >= 4,
which already gives the conclusion that _INTSIZEOF(n) is at least 4.
Next, determine the range. When sizeof(n) is 1 ~ 4, the value of (sizeof(n) + sizeof(int) - 1) is 4 ~ 7,
and in binary those values are:
4 => 100b
5 => 101b
6 => 110b
7 => 111b
When 4 ~ 7 is ANDed bitwise with ~(sizeof(int) - 1), i.e. with ...11111100b, the low two bits are cleared and every result is 100b, that is, 4.
So we conclude that when sizeof(n) is 1 ~ 4, the value of _INTSIZEOF(n) is always 4.
Similarly, when sizeof(n) is 5 ~ 8, the value of _INTSIZEOF(n) is always 8.
In general, when sizeof(n) lies in the range (4x - 3) ~ 4x, the value of _INTSIZEOF(n) is always 4x, where x is a positive integer, so the value of _INTSIZEOF(n) is always an integer multiple of 4.
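A short verification program; the struct five type, with sizeof == 5, is made up just to exercise the rounding (the macro is copied from the text above, keeping its reserved name for fidelity):

#include <stdio.h>

#define _INTSIZEOF(n) ((sizeof(n) + sizeof(int) - 1) & ~(sizeof(int) - 1))

struct five { char c[5]; };   /* sizeof == 5, should round up to 8 */

int main(void)
{
    printf("char   : %zu -> %zu\n", sizeof(char),        _INTSIZEOF(char));
    printf("short  : %zu -> %zu\n", sizeof(short),       _INTSIZEOF(short));
    printf("int    : %zu -> %zu\n", sizeof(int),         _INTSIZEOF(int));
    printf("five   : %zu -> %zu\n", sizeof(struct five), _INTSIZEOF(struct five));
    printf("double : %zu -> %zu\n", sizeof(double),      _INTSIZEOF(double));
    return 0;   /* on a 4-byte-int machine: 1->4, 2->4, 4->4, 5->8, 8->8 */
}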
Note: why not simply replace sizeof(int) with 4 here? Because the length of an int varies between machines. If the machine is 16-bit, an int is 2 bytes, and the value of _INTSIZEOF becomes an integer multiple of 2 instead.