Why is the wrong binary number displayed?

Code:

#include <stdio.h>
#include <stdlib.h>


int main()
{
    long int x;
    x = 1000000;
    printf("%ld\n", x);
    for(int i = 0; i < 32; i++)
    {
        printf("%c", (x & 0x80) ? '1' : '0');
        x <<= 1;
    }

    printf("\n");
    return 0;
}

This code is supposed to convert a decimal int to binary, but why doesn't it work correctly?

P.S. I solved this problem by replacing 0x80 with 0x80000000. But why was the wrong number displayed with 0x80?

CodePudding user response:

EDIT2:
OP asks "P.S. I solved this problem by replacing 0x80 with 0x80000000. But why was the wrong number displayed at 0x80?"

What was wrong is that 0x80 is equal to 0x00000080, so it never tests any bit above b7 (where the bits, right to left, are numbered b0 to b31).

The corrected value, 0x80000000, sets only the MSB and can be used (kind of) to 'sample' each bit of the data as the value is shifted to the left.
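For completeness, here is the OP's loop with just that one constant changed (a sketch assuming a 32-bit int; left-shifting a signed value still raises the sign-bit concern discussed below):

#include <stdio.h>

int main(void)
{
    long int x = 1000000;
    printf("%ld\n", x);
    for(int i = 0; i < 32; i++)
    {
        /* 0x80000000 tests b31, so each pass samples the next bit
           as x is shifted left underneath the mask */
        printf("%c", (x & 0x80000000) ? '1' : '0');
        x <<= 1;
    }
    printf("\n");
    return 0;
}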
//end edit2

Two concerns:
1) Mucking with the sign bit of a signed integer can be problematic
2) "Knowing" there are 32 bits can be problematic.

The following makes fewer presumptions. It creates a bit mask (only the MSB is set in an unsigned int value) and shifts that mask toward the LSB.

#include <stdio.h>

int main(void) {
    long int x = 100000;
    printf("%ld\n", x);

    for( unsigned long int bit = ~(~0u >> 1); bit; bit >>= 1 )
        printf("%c", (x & bit) ? '1' : '0');

    printf("\n");

    return 0;
}
100000
00000000000000011000011010100000

Bonus: Here is a version of the print statement that doesn't involve branching:

printf( "%c", '0'   !!(x & bit) );

EDIT:
Having seen the answer by @Lundin, the suggestion to insert spaces to improve readability is an excellent idea! (Full credit to @Lundin.)

Below, not only is the long string of output bits divided into nibble-sized (one hex digit) chunks, but the compile-time value is written so that it is easy to see it is 10 million. (1e7 would have done, too.)

A new-and-improved version:

#include <stdio.h>
#include <stdlib.h>

int main() {
    long int x = 10 * 1000 * 1000;
    printf("%ld\n", x);

    for( unsigned long int bit = ~(~0u >> 1); bit; bit >>= 1 ) {
        putchar( '0' + !!(x & bit) );
        if( bit & 0x11111110 ) putchar( ' ' ); /* space between nibbles; ...10 avoids a trailing space */
    }

    putchar( '\n' );

    return 0;
}
10000000
0000 0000 1001 1000 1001 0110 1000 0000

CodePudding user response:

1000000 dec = 11110100001001000000 bin.
80 hex = 10000000 bin.
And this doesn't make much sense at all:

  11110100001001000000
&             10000000

Instead fix the loop body to something like this:

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    long int x;
    x = 1000000;
    printf("%ld\n", x);
    for(int i = 0; i < 32; i++)
    {
        unsigned long mask = 1u << (31-i);
        printf("%c", (x & mask) ? '1' : '0');
        if((i+1) % 8 == 0) // to print a space after 8 digits
          printf(" ");
    }

    printf("\n");
    return 0;
}
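With the 32-bit masks as written, this prints (output reconstructed from the code, not quoted from the original post):

1000000
00000000 00001111 01000010 01000000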

CodePudding user response:

Without using an integer counter to track which digit is at the i-th position, you can instead use an unsigned variable that equals 2^i on the i-th iteration. Because the variable is unsigned, it simply becomes zero once the shift overflows, which ends the loop. Here is what the code looks like. It displays the number in reverse order (the first digit printed is the coefficient of 2^0 in the binary decomposition of the number); see the sketch after the code for an MSB-first variant.

#include <stdio.h>

int
main(void)
{
    int x;
    x = 1000000;
    printf("%d\n", x);
    for(unsigned b = 1; b; b<<=1)
      printf("%c", x & b ? '1':'0');
    printf("\n");
    return 0;
}
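If MSB-first output is wanted while keeping this LSB-first walk, one option (a sketch, not part of the original answer) is to collect the digits in a buffer and print it backwards:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int x = 1000000;
    char buf[sizeof(unsigned) * CHAR_BIT]; /* one char per bit */
    int n = 0;

    for(unsigned b = 1; b; b <<= 1)    /* LSB-first walk, as in the answer */
        buf[n++] = (x & b) ? '1' : '0';

    while(n--)                         /* emit back-to-front for MSB-first */
        putchar(buf[n]);
    putchar('\n');
    return 0;
}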

CodePudding user response:

I would use functions

#include <stdio.h>
#include <limits.h>

void printBin(long int x)
{
    unsigned long mask = 1UL << (sizeof(mask) * CHAR_BIT - 1); /* set only the MSB */
    int digcount = 0;
    while(mask)
    {
        printf("%d%s", !!(x & mask),   digcount % 4 ? "" : " ");
        mask >>= 1;
    }
}

int main(void)
{
    printBin(0); printf("\n");
    printBin(1); printf("\n");
    printBin(0xf0); printf("\n");
    printBin(-10); printf("\n");
}
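For reference (reconstructed from the code, not from the original post): on a platform where long and unsigned long are 64 bits, this test program prints the following, each line also carrying a trailing space from the final group:

0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 1111 0000
1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 0110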