Suppose you have two square matrices, m1 and m2, of size n x n. The algorithm iteratively runs an O(n^2) operation on them while halving the matrices at each iteration. Something like the pseudo-code below:
while m1.width > 0:
    u = m1 & m2    # pairwise logical AND operation (O(n^2))
    m1 = halve(m1)
    m2 = halve(m2)
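For concreteness, here is one minimal runnable interpretation of that pseudo-code (just a sketch: it assumes NumPy boolean matrices and that halve keeps the top-left quadrant, so each dimension is halved):

import numpy as np

def halve(m):
    # assumed interpretation: keep the top-left quadrant, halving each dimension
    k = m.shape[0] // 2
    return m[:k, :k]

n = 8
m1 = np.random.rand(n, n) > 0.5   # two random boolean n x n matrices
m2 = np.random.rand(n, n) > 0.5

while m1.shape[0] > 0:
    u = m1 & m2        # pairwise logical AND, O(n^2) at the current size
    m1 = halve(m1)
    m2 = halve(m2)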
Since the loop is repeated log(n) times, O(n^2 · log(n)) is an upper bound for the algorithm's time complexity. I am looking for a tighter upper bound. Any ideas?
I expect a tighter upper bound exists because the size of the matrices on which the O(n^2) operation runs shrinks geometrically (it is halved) at each iteration.
CodePudding user response:
With a loop like this, it’s often a good idea to add up the work done in each iteration rather than to multiply the work done by one iteration by the number of iterations. After all, the work done per iteration is changing.
Leaving out the O notation for a moment, your first iteration does roughly n^2 work, both to compute the AND and to halve the matrix size. Now, how much work does the second iteration do? Well, since your matrices are half as big as before in each dimension, the work done to AND them is roughly (n / 2)^2 = n^2 / 4. When you halve the matrices again, the next AND does (n / 4)^2 = n^2 / 16 work. More generally, the kth time you go through the loop (counting from k = 0), your AND operation will do n^2 / 4^k work.
Adding this up across all iterations of the loop gives us that the total work done is
n^2 / 4^0 + n^2 / 4^1 + n^2 / 4^2 + …
= n^2 · (1/4^0 + 1/4^1 + 1/4^2 + …)
That latter bit is the sum of a geometric series, and it adds up to 4/3. So that gives us that the total work done is at most 4n^2 / 3, which is O(n^2).
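If you want to sanity-check that constant numerically, here is a small Python sketch (n = 1024 is just an arbitrary example size) that adds up the per-iteration cost while the dimension keeps halving and compares the total to 4n^2 / 3:

def total_work(n):
    total = 0
    while n > 0:
        total += n * n   # cost of AND-ing two n x n matrices at this iteration
        n //= 2          # each dimension is halved
    return total

n = 1024
print(total_work(n))   # 1398101
print(4 * n * n / 3)   # 1398101.33...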
Another way to see this: while your code is written iteratively, you could imagine thinking of it as a recursive algorithm that does O(n^2) work, then runs again on an input of size n / 2. That gives the recurrence
T(n) = T(n / 2) + O(n^2).
The Master Theorem (with a = 1, b = 2, d = 2, and log_b a < d) then tells you that the solution is O(n^2).
CodePudding user response:
It's not clear what "halve" is, but assuming it halves n (rather than halving the number of elements in the matrix), this calculation shows O(n^2): n^2 + (n/2)^2 + (n/4)^2 + ... = n^2 * (1 + 1/4 + 1/16 + ...) = 4n^2/3.
If it halves the number of elements in the matrix, then you have this calculation (which also gives O(n^2)): n^2 + n^2/2 + n^2/4 + ... = n^2(1 + 1/2 + 1/4 + 1/8 + ...) = 2n^2.
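Under that second interpretation, a quick numerical check (again just a sketch, with n = 1024 as an arbitrary size) shows the same behaviour:

def total_work_halving_elements(n):
    cells = n * n          # current number of elements in the matrix
    total = 0
    while cells > 0:
        total += cells     # cost of the AND at this iteration
        cells //= 2        # the element count is halved
    return total

n = 1024
print(total_work_halving_elements(n))   # 2097151
print(2 * n * n)                        # 2097152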
The common trick used here in computing complexities for halving algorithms is to take the sum to infinity rather than stopping when n/2^k gets down to 1. This happens to give a tight upper bound.
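In symbols, the trick amounts to bounding the finite sum that the loop actually performs by the corresponding infinite geometric series (shown here for the dimension-halving case):

\[
\sum_{k=0}^{\lfloor \log_2 n \rfloor} \frac{n^2}{4^k} \;\le\; n^2 \sum_{k=0}^{\infty} \frac{1}{4^k} \;=\; \frac{4 n^2}{3} \;=\; O(n^2).
\]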