Does manipulating n have any impact on the O of an algorithm?

recursive code for example:

public void Foo(int n)
{
  n -= 1;
  if (n <= 0) return;
  n -= 1;
  if (n <= 0) return;

  Foo(n);
}

Does the reassignment of n impact O(N)? Sounds intuitive to me...

Does this algorithm have O(N) by dropping the constant? Technically, since it's decrementing n by 2, it would not have the same mathematical effect as this:

public void Foo(int n) // O(Log n)
{
  if(n <= 0) return;
  Console.WriteLine(n);
  Foo(n / 2);
}

But wouldn't the halving of n contribute to O(N), since you are only touching half of n? To be clear, I am learning Big O notation and its subtleties. I have been looking for cases like the first example, but I am having a hard time finding a specific answer.

CodePudding user response:

The reassignment of n itself is not really what matters when talking about O notation. As an example consider a simple for-loop:

for i in range(n):
   do_something()

In this algorithm, we do something n times. This is equivalent to the following algorithm:

while n > 0:
   do_something()
   n -= 1

And that, in turn, is equivalent to the first recursive function you presented. So what really matters is how many computations are done relative to the input size, which is the original value of n. For this reason, all three of these algorithms are O(n), since each of them decreases the 'input size' by one each time. Even if they had decreased it by 2 each time, they would still be O(n) algorithms, since constant factors don't matter in O notation. Thus the following algorithm is also an O(n) algorithm:

while n > 0:
    do_something()
    n -= 2

or

while n > 0:
    do_something()
    n -= 100000
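A small Python sketch makes the constant-factor point concrete (count_iterations is a helper name I'm introducing here, not something from the post): counting the loop iterations for different step sizes shows that a step of k just divides the count by k, which is still linear in n.

```python
def count_iterations(n, step):
    # Run the same while-loop as above, but count how many times
    # do_something() would execute before n reaches 0.
    count = 0
    while n > 0:
        n -= step
        count += 1
    return count

# The step size only rescales the count by a constant factor:
print(count_iterations(1000, 1))    # 1000
print(count_iterations(1000, 2))    # 500
print(count_iterations(1000, 100))  # 10
```

Doubling n doubles all three counts, which is exactly what O(n) captures; the step size never changes that growth rate.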

However, the second recursive function you presented is an O(log n) algorithm, as you've written in the comments. (Note that it does terminate: since n is an integer, n / 2 eventually reaches 0, and the if (n <= 0) return; line is the base case.) Intuitively, halving the input size every time corresponds exactly to taking the base-two logarithm of the original input. Consider the following:

n = 32. The algorithm halves every time: 32 -> 16 -> 8 -> 4 -> 2 -> 1. In total, we did 5 computations. Equivalently log2(32) = 5.
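A quick Python check confirms this arithmetic (count_halvings is a name I'm introducing for illustration; it mirrors the recursion depth of Foo(n / 2)):

```python
import math

def count_halvings(n):
    # Count how many times n can be halved (integer division)
    # before reaching 1 -- the number of recursive calls that do work.
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(count_halvings(32))         # 5
print(int(math.log2(32)))         # 5
print(count_halvings(1_000_000))  # 19, where a linear loop would take 1,000,000 steps
```

The gap between 19 and 1,000,000 is why O(log n) algorithms scale so much better than O(n) ones for large inputs.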

So to recap: what matters is the original input size and how many computations are done relative to it. Constant factors in those computations do not matter.

If I misunderstood your question, or you have follow-up questions, feel free to comment on this answer.
