How did the problem instance with the parameter size m = log n become 2^(log n)?


Problem Instance and Big-O Notation

Hello,

I am trying to understand a solution to a problem, but there is one part of the solution I can't follow: how the problem instance was calculated.

Here are the questions and the solutions:

"An instance of a 'Perfect' decision problem is an integer n >= 1 and the problem is to decide if n is the sum of each of its proper divisors. For example, 6 is 'Perfect' because 1 2 3 = 6."

Since the input to 'Perfect' is a single integer n, an appropriate size parameter is m = log(n), since this is roughly the number of bits needed to represent n.
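
For concreteness, here is a minimal Python sketch of a naive decider for 'Perfect' (my own illustration, not from the original post). Like the O(n^2) algorithm discussed below, its work is polynomial in the value n, while the input itself only occupies about log2(n) bits:

```python
def is_perfect(n: int) -> bool:
    """Decide 'Perfect': is n equal to the sum of its proper divisors?"""
    if n < 1:
        return False
    # Trial division over every candidate divisor: about n iterations,
    # i.e. polynomial in the value n but exponential in the bit length
    # m = n.bit_length() ~ log2(n).
    divisor_sum = sum(d for d in range(1, n) if n % d == 0)
    return divisor_sum == n

print(is_perfect(6))   # True: 1 + 2 + 3 = 6
print(is_perfect(28))  # True: 1 + 2 + 4 + 7 + 14 = 28
print(is_perfect(12))  # False: 1 + 2 + 3 + 4 + 6 = 16 != 12
```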

"Suppose an algorithm for deciding 'Perfect' requires O(n^2) steps, where n is the problem instance. Use the answer above and Big-O growth terminology to describe the growth of the algorithm's running time"

The algorithm has exponential running time, since n^2 = (2^(log n))^2 = 2^(2 log n) = 4^(log n) = 4^m, which grows exponentially in the size parameter m.

I can't seem to understand or figure out how the problem instance with the size parameter m = log n becomes 2^(log n).

CodePudding user response:

We are talking about m bits, where m is equal to log(n) according to the problem statement.

Now every bit can be either 0 or 1.

Suppose you have a representation of 2 bits: _ _. Each of them can be either 0 or 1, so there are 2^2 = 4 possible representations.

Similarly, with m bits the number of possible values becomes 2^m = 2^(log n), which is exactly n.
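
As a quick sanity check, here is a small Python snippet (my own illustration, not part of the original answer) showing that m bits cover 2^m distinct values, so an m-bit integer n is on the order of 2^m:

```python
# m bits can encode 2**m distinct values, so an m-bit integer n
# can be as large as 2**m - 1; equivalently n ~ 2**m when m = log2(n).
for m in range(1, 6):
    print(f"{m} bits -> {2**m} possible values, max n = {2**m - 1}")

# Going the other way, Python reports the bit length m of a given n:
n = 6
print(n.bit_length())  # 3, since 6 = 0b110 and log2(6) ~ 2.58
```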

CodePudding user response:

As stated by @AbhinavMathur above, the text is clearly wrong. The time to solve the problem is exponential in the number of bits.
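
To get a concrete feel for that blow-up, here is a short Python sketch (my own illustration) tabulating the step count n^2 = 4^m of the quoted algorithm against the bit length m:

```python
# With an m-bit input, n can be about 2**m, so an O(n**2) algorithm
# performs on the order of (2**m)**2 = 4**m steps: exponential in m.
for m in (8, 16, 32, 64):
    n = 2**m
    print(f"m = {m:2d} bits -> n ~ 2^{m}, steps ~ 4^{m} = {n**2:.3e}")
```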
