Binary integer to decimal
An OJ problem I wanted to solve. It can be computed with bitwise operations, but note that numeric constants in C default to type int, so a left shift can overflow unless the operand is promoted to a wider type first.
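As a minimal illustration of that pitfall (not part of the problem itself): the literal 1 has type int, so on a typical platform with 32-bit int, shifting it by 32 bits is undefined behavior, while a ULL-suffixed literal is shifted in 64 bits and is well defined.

#include <stdio.h>

int main(void)
{
    /* 1 has type int; with a 32-bit int, (1 << 32) would be  */
    /* undefined behavior. The ULL suffix makes the literal   */
    /* unsigned long long, so the shift happens in 64 bits.   */
    unsigned long long ok = 1ULL << 32;
    printf("%llu\n", ok);   /* prints 4294967296 */
    return 0;
}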
Description
Given a binary non-negative integer x, where x < 2^32, convert it to its decimal value.
Input
Multiple lines; each line contains a binary non-negative integer x.
Output
For each input line, output the decimal value corresponding to x.
Sample Input
0
1
01
10
11
100001
1111111111111111
Sample Output
0
1
1
2
3
33
65535
HINT
Pay attention to the data range!!!
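For example, the sample input 100001 expands to 1*2^5 + 0*2^4 + 0*2^3 + 0*2^2 + 0*2^1 + 1*2^0 = 32 + 1 = 33, matching the sample output.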
#include <stdio.h>
#include <string.h>

const int maxn = 100000;

int main(int argc, char const *argv[])
{
    char indata[maxn];
    memset(indata, 0, sizeof(indata));
    while (scanf("%s", indata) != EOF)
    {
        int n = strlen(indata);
        /* x < 2^32, so a 64-bit unsigned type is wide enough */
        unsigned long long sum = 0;
        for (int i = n - 1; i >= 0; i--)
        {
            /* k must be 64-bit before the shift; shifting a */
            /* plain int here could overflow for large x     */
            unsigned long long k = indata[i] - '0';
            sum += k << (n - i - 1);
        }
        printf("%llu\n", sum);
    }
    return 0;
}
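For comparison (a sketch, not the original solution): the standard library already provides this conversion. strtoull, declared in <stdlib.h>, parses a digit string in a given base, so passing base 2 does the same work as the manual shift loop above.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char indata[100000];
    /* strtoull with base 2 parses a binary digit string */
    /* into an unsigned long long, like the loop above.  */
    while (scanf("%s", indata) != EOF)
        printf("%llu\n", strtoull(indata, NULL, 2));
    return 0;
}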