I think the time complexity of the code below should be O(1), since the worst case is about log base 2 of 1000, or some other fixed value. But I am not sure, since its running time does vary with the input, and the given answer is O(n); I am very confused about how they got that. If we increase n, the function gets called fewer times, so is it O(1/n)? Is that even possible?
#include <iostream>
using namespace std;

#define LIMIT 1000

void fun2(int n)
{
    if (n <= 0)
        return;
    if (n > LIMIT)
        return;
    cout << n << " ";
    fun2(2 * n);
    cout << n << " ";
}
CodePudding user response:
The #define LIMIT 1000, together with the base case if (n > LIMIT) return;, guarantees O(1) because it puts a ceiling on the number of recursive calls the function can make. Even if it were #define LIMIT 10e50, it would still be O(1).
Recall that Big O is concerned with theoretical growth, not with how much work is actually done in practice. If there is a cap on how much work the function can do, no matter how large that cap may be, it's a constant-time operation from a complexity perspective.
Is Big O necessarily a realistic reflection of the work the algorithm does? No. Big O is a scalability heuristic, not the final word on efficiency. All O(1) says here is that beyond some input size, you can keep increasing n indefinitely at no additional cost. In the real world, constant factors often matter.
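To make that ceiling concrete, here is a minimal sketch (my own addition, not part of the original problem) that copies fun2 and adds a hypothetical callCount counter, then prints how many calls are made for several inputs:

#include <iostream>
using namespace std;

#define LIMIT 1000

// Illustration-only counter; not part of the original code.
static int callCount = 0;

void fun2Counted(int n)
{
    ++callCount;            // count every entry into the function
    if (n <= 0)
        return;
    if (n > LIMIT)
        return;
    cout << n << " ";
    fun2Counted(2 * n);
    cout << n << " ";
}

int main()
{
    for (int n : {1, 7, 500, 1000, 50000}) {
        callCount = 0;
        fun2Counted(n);
        cout << "\nn = " << n << " -> " << callCount << " call(s)\n";
    }
    // No matter how large n gets, the count never exceeds roughly
    // log2(LIMIT) + 2 calls (11 or so for LIMIT = 1000), which is
    // exactly the constant ceiling that makes this O(1).
}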
CodePudding user response:
To respond to the individual points you've raised:
I think the time complexity of the code below should be O(1), since the worst case is about log base 2 of 1000, or some other fixed value.
Yep, that's exactly right!
But I am not sure, since its running time does vary with the input
You are correct that the runtime varies with the input size. However, that does not necessarily mean that the runtime is not O(1). If an algorithm's runtime is always bounded from above by some constant, regardless of what the input size is, then its runtime is O(1). Stated differently, an O(1) runtime means "without even looking at your input, I can bound how long the algorithm is going to take to run." (Technically that isn't 100% accurate, since big-O notation talks about what happens for sufficiently large inputs, but it's true in this case.)
Here's another example of this:
void sillyFunction(int n) {
    for (int i = 0; i < 137 && i < n; i++) {
        cout << '*' << endl;
    }
}
We can guarantee that the loop will run at most 137 times regardless of what n is. However, for small values of n, we may do less work than this. But the runtime here is still O(1), since we have that bound of "at most 137 iterations."
Here's another example:
void amusingFunction(int n) {
    for (int i = 137; i >= 0 && i >= n; i--) {
        cout << '*' << endl;
    }
}
Again, this loop is guaranteed to run at most 138 times (once for each value of i from 137 down to 0). Here, though, the work decreases as we increase n, to the point where the loop never runs once n exceeds 137. But since we can bound the total number of loop iterations by a constant without even looking at n, the runtime is O(1).
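If it helps to see those two bounds concretely, here is a small sketch (my own illustration, not from the original problem; the sillyCount and amusingCount helpers are hypothetical) that counts the loop iterations instead of printing stars:

#include <iostream>
using namespace std;

// Illustration-only copies of the loops above that return how many
// iterations they perform rather than printing anything.
int sillyCount(int n) {
    int count = 0;
    for (int i = 0; i < 137 && i < n; i++) count++;
    return count;
}

int amusingCount(int n) {
    int count = 0;
    for (int i = 137; i >= 0 && i >= n; i--) count++;
    return count;
}

int main() {
    for (int n : {5, 137, 1000000}) {
        cout << "n = " << n
             << ": sillyFunction loops " << sillyCount(n)
             << " times, amusingFunction loops " << amusingCount(n)
             << " times\n";
    }
    // sillyFunction:   5, 137, 137 iterations -- grows with n but caps at 137.
    // amusingFunction: 133, 1, 0 iterations   -- shrinks with n and never exceeds 138.
}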
Here's a trickier example:
void deviousFunction(int n) {
    if (n <= 137) {
        while (true) { // infinite loop!
            cout << '*';
        }
    }
    cout << "Yup." << endl;
}
This function will go into an infinite loop for any n ≤ 137. However, for sufficiently large values of n (namely, when n > 137), the algorithm always terminates immediately. This algorithm therefore has a runtime of O(1): there's a fixed amount of work such that, for any sufficiently large n, the algorithm does at most that much work. (This is highly contrived and I've never seen anything like this before, but you get the picture.)
and the given answer is O(n); I am very confused about how they got that.
The runtime bound of O(n) here seems incorrect to me. It's technically not wrong to say the runtime is O(n), since O(n) is a valid (if loose) upper bound on an O(1) runtime, but it's not tight. You should ask whoever gave you this bound to explain their reasoning; perhaps there's a typo in the code or in the explanation?
If we increase n, the function gets called fewer times, so is it O(1/n)? Is that even possible?
As n increases, the number of recursive calls is nonincreasing, but it doesn't necessarily decrease. For example, fun2(1001) and fun2(10000000) each result in a total of exactly one call being made.
It's not possible for an algorithm to have a runtime of O(1 / n) because all algorithms do at least a constant amount of work, even if that work is "set up the stack frame." A runtime bound of O(1 / n) means that, for sufficiently large n, you would be doing less than one unit of work. So in that sense, there's a difference between "the runtime drops as n gets bigger, to the point where it flattens out at a constant" and "the runtime is O(1 / n)."
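As a final illustration (again my own sketch, not part of the original exercise), here is a hypothetical helper that works out how many times fun2 would be entered for a given n by simulating the doubling, which shows the count shrinking and then flattening out at one:

#include <iostream>
using namespace std;

#define LIMIT 1000

// Illustration-only: compute the number of times fun2 would be entered
// for a given n, without actually recursing.
int callsFor(int n) {
    if (n <= 0) return 1;        // the first call returns immediately
    int calls = 0;
    while (n <= LIMIT) {         // each doubling corresponds to one more call
        calls++;
        n *= 2;
    }
    return calls + 1;            // plus the final call with n > LIMIT
}

int main() {
    for (int n : {1, 10, 100, 999, 1000, 1001, 1000000}) {
        cout << "fun2(" << n << ") makes " << callsFor(n) << " call(s)\n";
    }
    // Prints 11, 8, 5, 2, 2, 1, 1. The count drops as n grows, but it
    // bottoms out at one call: the cost flattens into a constant rather
    // than continuing to shrink the way O(1/n) would require.
}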