I would be really glad if anyone could help me out...
I need to write a program that prints an integer in binary representation without using loops. It also needs to print the number in 32-bit representation. For small numbers (e.g. 256) the code below works fine. For big numbers I don't get the expected output.
Does anybody see how I can fix this?
Thanks a lot!
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

void
print_reverse(uint32_t value)
{
    if (value % 10 == 0) {
        if (value == 0) {
            printf("1");
            return;
        } else {
            printf("0");
            print_reverse(value / 10);
        }
    } else {
        if (value == 1) {
            return;
        } else {
            printf("1");
            print_reverse((value - 1) / 10);
        }
    }
    return;
}
int
print_binary1(uint32_t value, uint32_t binary1, int zeros1)
{
    int rest, binary, zeros;
    if (value == 0) {
        zeros = zeros1;
        printf("%0*d", 32 - zeros, 0);
        return binary1;
    } else {
        rest = value % 2;
        if (rest >= 1) {
            binary = binary1 * 10 + 1;
            zeros = zeros1 + 1;
            return print_binary1((value - 1) / 2, binary, zeros);
        } else {
            binary = binary1 * 10;
            zeros = zeros1 + 1;
            return print_binary1(value / 2, binary, zeros);
        }
    }
}
void
print_binary(uint32_t value)
{
    printf("%" PRIu32, value);
    if (value == 0) {
        printf(" = 0b00000000000000000000000000000001");
    } else {
        printf(" = 0b");
        int binary = print_binary1(value, 1, 0);
        print_reverse(binary);
    }
}
int
main(void)
{
    uint32_t value;
    printf("value: ");
    if (scanf("%" SCNu32, &value) != 1) {
        fprintf(stderr,
            "ERROR: While reading the 'uint32_t' value an error occurred!");
        return EXIT_FAILURE;
    }
    printf("\n");
    print_binary(value);
    printf("\n");
    return EXIT_SUCCESS;
}
CodePudding user response:
First of all, it's impossible to do without looping unless you have 32 copies of virtually the same line of code.
For example, the code in question uses recursion to achieve looping. (In fact, any loop can be achieved using recursion.) But it's still looping (executing the same code repeatedly).
The simplest way to achieve what you want while hiding the looping would be to use snprintf.
The rest answers why the code doesn't work for larger numbers.
It looks like this produces what I previously called "decimal-coded binary" (in reference to binary-coded decimal), where each digit of the decimal representation represents a bit. For example, it converts thirteen (1101 base 2) to one thousand one hundred and one (1101 base 10).
If so, the largest input the program supports is 1,1111,1111 (base 2), i.e. 511: print_binary1 starts from a sentinel leading 1, so a 9-bit input already accumulates the 10-digit number 11,1111,1111 (base 10), and one more bit would produce an 11-digit number that doesn't fit in a uint32_t. Put differently, a 32-bit accumulator supports a floor(log10(2^32-1)) = 9-bit input. This means the program supports numbers from 0 to 511, but not beyond.
By switching to a 64 bit accumulator, you could support a floor(log10(2^64-1)) = 19 bit input, which is still not enough for what you want to do.
You need a different approach.