Simplification of Multi-variable Big-O Time Complexity


I am attempting to calculate the time complexity of a function that sorts two arrays using merge sort, finds their intersection, and then sorts the result of that intersection.

By analyzing the steps involved, I found the key steps to have time complexities of

O(a log a) + O(b log b) + O(a + b) + O((a + b) log(a + b)),

where a is the size of the first array and b is the size of the second array.

I further simplified this down to: O(a log a) + O(b log b) + O((a + b) log(a + b)).

This is where I got stuck. But from what I have read, the fact that a + b is greater than both a and b allows me to remove the terms a log a and b log b. Using this, I got the overall big-O notation as O((a + b) log(a + b)).

I am not sure if this is entirely correct though.
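The question does not include code, but the pipeline it describes might look like the following Python sketch (the function and variable names here are my own illustration, not from the original post):

```python
def merge_sort(arr):
    """Standard top-down merge sort: O(n log n)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves: O(n)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def sorted_intersection(first, second):
    # Step 1: sort both arrays -- O(a log a) + O(b log b)
    a_sorted = merge_sort(first)
    b_sorted = merge_sort(second)
    # Step 2: intersect the two sorted arrays with two pointers -- O(a + b)
    result = []
    i = j = 0
    while i < len(a_sorted) and j < len(b_sorted):
        if a_sorted[i] == b_sorted[j]:
            if not result or result[-1] != a_sorted[i]:  # skip duplicates
                result.append(a_sorted[i])
            i += 1; j += 1
        elif a_sorted[i] < b_sorted[j]:
            i += 1
        else:
            j += 1
    # Step 3: sort the intersection -- O((a + b) log(a + b)) worst case.
    # (With this two-pointer intersection the result is already sorted, so
    # this final sort is redundant; it is kept only to mirror the steps
    # described in the question.)
    return merge_sort(result)
```

Note that if the intersection were computed with a hash set instead of two pointers, the final sort would actually be needed, and the stated O((a + b) log(a + b)) term would apply.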

CodePudding user response:

Your analysis is correct. Let me explain it more explicitly, step by step, in case you are not sure about it.

The overall time complexity is: O(a log(a)) + O(b log(b)) + O(a log(a + b) + b log(a + b)). (The O(a + b) intersection step is already dominated by the last term, so it is omitted here.)
The logarithm is a strictly increasing function for x > 0; that is, log(x) > log(y) iff x > y > 0.
Since input sizes cannot be negative, this fact applies to our analysis.
Using this property, we know that log(a + b) > log(a) (for b > 0), and therefore a log(a + b) > a log(a).
We can conclude that O(a log(a)) is a subset of O(a log(a + b)), since every function that is upper-bounded by a log(a) as the input size goes to infinity is also upper-bounded by a log(a + b).
Therefore we can get rid of O(a log(a)).
Apply the same logic for O(b log(b)).
At the end, we have O(a log(a + b) + b log(a + b)), which corresponds to O((a + b) log(a + b)).
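As a quick sanity check on this absorption argument, one can compare the full cost expression against the simplified bound numerically (a Python sketch; `full_cost` and `simplified_cost` are illustrative names, not from the answer):

```python
import math

def full_cost(a, b):
    # All three terms from the analysis:
    # a log a + b log b + (a + b) log(a + b)
    return a * math.log(a) + b * math.log(b) + (a + b) * math.log(a + b)

def simplified_cost(a, b):
    # The single dominating term: (a + b) log(a + b)
    return (a + b) * math.log(a + b)

# Because a log a <= (a + b) log(a + b) and likewise for b, the full cost
# is at most 3x the simplified bound for any positive a, b -- which is
# exactly what absorbing the smaller terms into O((a + b) log(a + b)) means.
for a, b in [(10, 10), (1000, 10), (10**6, 10**6)]:
    ratio = full_cost(a, b) / simplified_cost(a, b)
    assert 1 <= ratio <= 3
```

The bounded ratio is the constant factor that big-O notation hides; it is why dropping the a log a and b log b terms is valid.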
