(x & y) + ((x ^ y) >> 1) equals (x + y) / 2.
(x & y) + ((x ^ y) >> 1) works by splitting the bits of x and y into three categories, computing the average contribution of each category, and summing the results.
The first category is the bit positions where x and y are both 1; their average is given by x & y.
The second category is the positions where exactly one of x and y has a 1 bit; their average is computed by (x ^ y) >> 1.
The third category is the positions where x and y are both 0; these contribute nothing and need no calculation.
Next I will explain how the first two cases are calculated:
For bits where x and y are both 1, adding them and dividing by 2 gives back the original bits: for example, adding two copies of 00001111 and halving the sum yields 00001111 again. x & y extracts exactly these common bits, so it is already the average of this part. That is the first part.
In the second part, the positions where only one operand has a 1 are extracted with the exclusive-or operation; then >> 1 (a right shift by one bit, which equals dividing by 2) gives the average of this part.
In the third part, the corresponding bits are both 0, so their sum divided by two is 0 and no calculation is required. Adding the three parts together gives (x & y) + ((x ^ y) >> 1).
A further benefit is that this form avoids overflow.
Assume x and y are unsigned char values (0 to 255, one byte each). The average of x and y is obviously also between 0 and 255, but computing x + y directly may exceed 255, causing an overflow. Even though the final result fits in a byte, the intermediate overflow has to be handled; in assembly, you would need to account for the carry out of the high bit. With (x & y) + ((x ^ y) >> 1), no intermediate value ever exceeds 255, so the calculation cannot overflow.
Use (x & y) + ((x ^ y) >> 1) to calculate the average of x and y.