#include "main.h"
/**
* binary_to_uint - converts binary to unsigned int
* @b: pointing to the binary number to be converted
 * Return: the converted number, or
 * 0 if b is NULL or contains a character other than '0' or '1'.
*/
unsigned int binary_to_uint(const char *b)
{
int i;
unsigned int dec = 0;
if (!b)
return (0);
for (i = 0; b[i]; i++)
{
if (b[i] < '0' || b[i] > '1')
return (0);
dec = (2 * dec) + (b[i] - '0');
}
return (dec);
}
Answer:
b[i] contains either the character '0' or the character '1'; otherwise the function returns 0:

if (b[i] < '0' || b[i] > '1')
	return (0);

So subtracting the character '0' from the character '0' or '1' gives you the integer 0 or 1:

dec = (2 * dec) + (b[i] - '0');

Since the string is a binary representation of a number, dec is multiplied by 2 instead of 10.
Answer:
The line dec = (2 * dec) + (b[i] - '0'); appends a bit to the dec accumulator.

Multiplying by 2 shifts the bits left, and dividing by 2 shifts them right:

- 1 -> 0001 ; x2 = 2 -> 0010
- 2 -> 0010 ; x2 = 4 -> 0100

That's what (2 * dec) is doing. It's essentially another way of writing dec << 1.
b[i] is a character, which can be cast to an integer, giving its ASCII code. The check beforehand ensures the string contains only '0' and '1' characters, which sit next to each other in ASCII. The expression b[i] - '0' is therefore either '1' - '0', which returns 1, or '0' - '0', which returns 0.

Adding that result to the accumulated value dec appends a 1 or a 0 as the new low-order bit of the dec accumulator.
This has a potential issue: the string behind the char * may contain more digits than unsigned int has bits, causing an overflow.
Using bitwise operators, you could rewrite that expression as follows:

dec = (dec << 1) | (b[i] - '0');