Analyze the time cost of the following algorithms using Θ notation


So many loops! I'm stuck at counting how many times the last loop runs, and I also don't know how to simplify the resulting summations to get a big-Theta bound. Please, somebody help me out!

    int fun2(int n) {
        int sum = 0;
        for (int i = n; i > 0; i--) {
            for (int j = i; j < n; j *= 2) {
                for (int k = 0; k < j; k++) {
                    sum += 1;
                }
            }
        }
        return sum;
    }

CodePudding user response:

  1. The outer loop for (int i = n; i > 0; i--) runs n times.
  2. The middle loop doubles j on every step, starting at i and stopping at n, so it performs at most O(log n) iterations.
  3. The innermost loop is linear in j, and j is always less than n.

Multiplying these three bounds gives the upper bound O(n² log n). That bound is not tight, though: for a fixed i the inner work is a geometric series that sums to less than 2n, so the total cost is actually Θ(n²), as the summation below shows.
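
To make that precise with a summation (my own notation, not part of the original answer): let T(n) be the number of times sum is incremented. Then

    T(n) \;=\; \sum_{i=1}^{n} \;\; \sum_{\substack{j = i,\, 2i,\, 4i,\, \dots \\ j < n}} j
         \;\le\; \sum_{i=1}^{n} 2n \;=\; 2n^2,
    \qquad
    T(n) \;\ge\; \sum_{i=\lceil n/2 \rceil}^{n-1} i
         \;\ge\; \Big\lfloor \frac{n}{2} \Big\rfloor \cdot \frac{n}{2}.

The inner sum is a geometric series whose largest term is below n, so it is below 2n; the lower bound counts only the first pass of the middle loop (j = i) for the roughly n/2 values of i between n/2 and n − 1. Together these give T(n) = Θ(n²).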

CodePudding user response:

Any problem has 2 stages:

  1. You guess the answer
  2. You prove it

In easy problems, step 1 is easy, and you then skip step 2 or wave it away as "obvious". This problem is a bit trickier, so both steps require some more formal thinking. If you guess incorrectly, you will get stuck on the proof.


The outer loop goes from n down to 1, so its number of iterations is O(n). The middle loop is uncomfortable to analyze because its bounds depend on the current value of i. As we usually do when guessing O-rates, let's just replace its bounds so that j runs from 1 to n.

    for (int i = n; i > 0; i--) {
            for (int j = 1; j < n; j *= 2) {
                perform j steps
            }
    }

You can see that the run-time of this new middle loop (together with the steps under it) is approximately 2n per outer iteration: it performs 1 + 2 + 4 + … steps, and a doubling series whose largest term is below n sums to less than 2n, which is O(n). Together with the outer loop, you get O(n²). This is my guess.
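
If you want to sanity-check the guess numerically, here is a small C++ counter program (my own illustration, not part of the proof) that runs the simplified loop nest above and prints the step count relative to n²:

    #include <cstdio>

    // Counts how many unit steps the simplified loop nest above performs:
    // n outer iterations, and for each one the middle loop contributes
    // 1 + 2 + 4 + ... (powers of two below n) "steps".
    long long simplified_count(int n) {
        long long steps = 0;
        for (int i = n; i > 0; i--) {            // outer loop: n iterations
            for (int j = 1; j < n; j *= 2) {     // middle loop: ~log2(n) iterations
                steps += j;                      // "perform j steps"
            }
        }
        return steps;
    }

    int main() {
        const int ns[] = {1000, 1500, 3000, 6000, 10000};
        for (int n : ns) {
            double ratio = (double)simplified_count(n) / ((double)n * n);
            // The ratio stays below 2 for every n, consistent with the
            // "about 2n steps per outer iteration" estimate, i.e. O(n^2).
            printf("n = %6d  steps/n^2 = %.3f\n", n, ratio);
        }
        return 0;
    }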

By editing the code I may have changed its O-rate, so I now have to prove that the guess is right.


To prove this, use the "sandwich" technique: edit the program in two different ways, one that can only make its run-time smaller and one that can only make it greater. If both new programs turn out to have the same O-rate, the original code must have that O-rate too.

Here is a "smaller" or "faster" code:

    do n/2 iterations; set i=n/2 for each of them {
            do just one iteration, where you set j = i {
                perform j steps
            }
    }

This code is faster than the original: for each of the roughly n/2 values of i between n/2 and n − 1, the original's middle loop starts at j = i ≥ n/2, so the original already performs at least n/2 innermost steps for each of them. The faster version does (n/2) · (n/2) = n²/4 iterations.

Here is a "greater" or "slower" code:

    do n iterations; set i=n for each of them {
            for (int j = 1; j <= 2 * n; j *= 2) {
                perform j steps
            }
    }

I made the upper bound for the middle loop 2n to make sure its last iteration is for j=n or greater.

This code is slower because, for every value of i, its middle loop (and everything under it) does at least as much work as the original's. The number of steps it performs per outer iteration is 1 + 2 + 4 + … up to roughly 2n, which sums to something like 4n. So the number of steps for the whole program is something like 4n².
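
Spelled out (again my own notation): if P is the largest power of two not exceeding 2n, the middle loop of this slower version performs

    1 + 2 + 4 + \dots + P \;=\; 2P - 1 \;\le\; 4n - 1

steps per outer iteration, because a doubling geometric series sums to one less than twice its largest term. Over n outer iterations that is fewer than 4n² steps in total.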

We got, in a somewhat formal manner:

n²/4 ≤ runtime ≤ 4n²

So runtime = O(n²).


A note on notation: here I use O where strictly it should be Θ. O is usually defined as an upper bound only, but since the lower bound n²/4 and the upper bound 4n² match up to a constant factor, the run-time is in fact Θ(n²), which is exactly what the question asks for.
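
As a final sanity check (my own addition, not part of the proof), here is a small, self-contained C++ program that counts the actual number of innermost iterations of fun2 and verifies it stays inside the n²/4 … 4n² sandwich. It uses the same logic as the question's code, only with wider integer types so the counter cannot overflow:

    #include <cassert>
    #include <cstdio>

    // Same loops as fun2 from the question; the return value equals the
    // number of times the innermost loop body runs.
    long long fun2(long long n) {
        long long sum = 0;
        for (long long i = n; i > 0; i--) {
            for (long long j = i; j < n; j *= 2) {
                for (long long k = 0; k < j; k++) {
                    sum += 1;
                }
            }
        }
        return sum;
    }

    int main() {
        for (long long n = 100; n <= 3200; n *= 2) {
            long long steps = fun2(n);
            // The sandwich bounds derived above: n^2/4 <= steps <= 4*n^2.
            assert(4 * steps >= n * n);
            assert(steps <= 4 * n * n);
            printf("n = %5lld  steps = %9lld  steps/n^2 = %.3f\n",
                   n, steps, (double)steps / (double)(n * n));
        }
        return 0;
    }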
