I am trying to convert a string containing only 1s and 0s to a decimal value. The variable value
is initialized to 0 and is never updated. I suspect the problem is that binaryString[i]
is treated as a string and therefore the arithmetic doesn't work. How can I fix this?
void binaryToDec(string binaryString, int value)
{
int binaryStringLength = binaryString.length();
for (int i = 0; i < binaryStringLength; i++)
{
value += pow(2,i) + binaryString[i];
}
}
I tried to use basic type casting like int(binaryString[i]), but that doesn't work.
CodePudding user response:
Firstly, binaryString[i] is a character, not an integer. To convert a digit character to an integer you can just subtract '0':
binaryString[i] - '0'
Secondly, pow(2,i) returns a floating-point number when you want an integer. That is inefficient and, more seriously, may be subject to rounding errors. Instead you should use a shift operator, which efficiently and accurately calculates integer powers of two:
1 << i
Thirdly, you have + where you need *. The two terms should be multiplied, not added.
Putting all that together you get:
value += (1 << i) * (binaryString[i] - '0');
But the most serious error of all is that your function does not return a value. It should look like this
int binaryToDec(string binaryString)
{
int value = 0;
...
return value;
}
Your version passes value as a parameter; that's the wrong way round. binaryString is a parameter, but value should be returned from the function. Not sure why, but this is a distinction a lot of newbies struggle with.
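Putting the pieces into that skeleton, the whole function might look like this (a minimal sketch; note it keeps your original indexing, so the first character of the string is treated as the least significant bit):
int binaryToDec(string binaryString)
{
    int value = 0;
    int binaryStringLength = binaryString.length();
    for (int i = 0; i < binaryStringLength; i++)
    {
        // Add this digit's contribution: 2^i times the digit value (0 or 1).
        value += (1 << i) * (binaryString[i] - '0');
    }
    return value;
}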
CodePudding user response:
You could construct a std::bitset from the string and then convert it to an unsigned long long with to_ullong(). This does, however, limit the solution to a maximum binary string size.
#include <bitset>
#include <fmt/core.h>
#include <stdexcept> // out_of_range
#include <string>
auto binaryToDec(const std::string& binary) {
if (binary.size() > 64) {
throw std::out_of_range{ "binary string is too big" };
}
return std::bitset<64>{binary}.to_ullong();
}
int main() {
try {
std::string binary_1{"101010"};
fmt::print("binary: {}, decimal: {}\n", binary_1, binaryToDec(binary_1));
std::string binary_2{
"1111""1111""1111""1111"
"1111""1111""1111""1111"
"1111""1111""1111""1111"
"1111""1111""1111""1111"
"1"
};
fmt::print("{}\n", binaryToDec(binary_2));
} catch (const std::exception& e) {
fmt::print("Error: {}.\n", e.what());
}
}
// Outputs:
//
// binary: 101010, decimal: 42
// Error: binary string is too big.
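As a side note, the std::bitset string constructor does its own validation: if the string contains any character other than '0' or '1', it throws std::invalid_argument, which the same catch block reports, since std::invalid_argument also derives from std::exception.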