Problem: given a decimal number as input, output the number of '1' bits in its binary representation. The problem looks simple, but how can we solve it efficiently?
Method 1
Dividing a decimal number by 2 removes the last digit of its binary representation, and the remainder of that division is the value of the removed digit.
For example:
Decimal number: 7 (0111)
7 % 2 = 1, 7 / 2 = 3 (011)
Decimal number: 3 (011)
3 % 2 = 1, 3 / 2 = 1 (01)
Decimal number: 1 (01)
1 % 2 = 1, 1 / 2 = 0 (0)
The number of '1' bits is 3.
// Method 1
int fun_1(int n) {
    int sum = 0;
    while (n) {
        if (n % 2 == 1) {
            sum++;
        }
        n /= 2;
    }
    return sum;
}
Method 2
Shifting a binary number right by one bit achieves the same effect as dividing by 2. The number of '1' bits can therefore be counted as follows.
// Method 2
int fun_2(int n) {
    int sum = 0;
    while (n) {
        sum += n & 0x01;  // add 1 if the lowest bit of n is set
        n >>= 1;          // e.g. for 7 (0111), shift right one bit per loop;
                          // the right shift replaces the division by 2
    }
    return sum;
}
As shown above, fun_2 first computes the bitwise AND of n and 0x01. If the lowest bit of n is 1, the AND yields 1 and sum increases by 1. Then n is shifted right by one bit so the next bit can be examined.
Note: bitwise AND and shift operate on the binary representation of the number (that is, the binary number corresponding to the decimal value).
Method 3
In some cases we would rather not shift many times.
Example: 1024 (100 0000 0000)
This number contains a single '1', yet Method 2 still loops once per bit. How can we count the '1' bits in numbers like this more efficiently?
// Method 3
int fun_3(int n) {
    int sum = 0;
    while (n) {
        n &= (n - 1);  // clear the lowest set bit; the loop runs only once per '1' in n
        sum++;
    }
    return sum;
}
A trace of the method above:
Decimal value: 12 (1100)
First iteration: n = (1100) & (1011) = (1000); n = 8
Second iteration: n = (1000) & (0111) = (0000); n = 0
The loop executes exactly once for each '1' bit in n.
Extended question:
Given two positive integers a and b, how many bits differ between the binary representations of a and b?
// How many bits of a and b differ?
#include <cstdlib>
#include <iostream>
using namespace std;

int fun(int n, int m) {
    int sum = 0;
    while (n | m) {
        sum += (0x01 & n) ^ (0x01 & m);  // 1 if the lowest bits of n and m differ
        n >>= 1;
        m >>= 1;
    }
    return sum;
}

int main() {
    int n, m;
    cin >> n >> m;
    int sum = fun(n, m);
    cout << sum << endl;
    system("pause");  // Windows-only: keep the console window open
    return 0;
}
Reference: The Beauty of Programming --- counting the number of '1' bits in a binary number.