Why is the big-O notation for the heap sort algorithm O(n log n)?


The height of the binary tree keeps shrinking every time we remove a root element. Why do we compute the total time complexity as n (the total number of elements) times log n (the number of swaps per root removal), when the number of swaps actually varies with how many elements remain?

It seems like the correct expression of the time complexity should be the sum of the swaps performed over every iteration of root removal. The number of swaps when i elements remain would be log(i), the depth of the binary tree at that point, with i ranging from 1 to n. The sum log(1) + log(2) + ... + log(n) is just O(log(n!)), which looks smaller than the expected complexity O(n log(n)). I would really appreciate it if anyone can explain this.
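To make the concern concrete, here is a minimal Python sketch (the function names and the swap counter are illustrative, not taken from any particular implementation) that counts the sift-down swaps performed during the extraction phase of heap sort:

    import random

    def sift_down(a, start, end, counter):
        # Restore the max-heap property for the subtree rooted at `start`,
        # within a[0..end], counting each swap in counter[0].
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1          # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                counter[0] += 1
                root = child
            else:
                return

    def extraction_swaps(a):
        # Heap-sorts `a` in place and returns the number of swaps made
        # while sifting down during the extraction phase.
        n = len(a)
        counter = [0]
        for start in range(n // 2 - 1, -1, -1):   # build phase, O(n) overall
            sift_down(a, start, n - 1, [0])
        for end in range(n - 1, 0, -1):           # extraction phase
            a[0], a[end] = a[end], a[0]           # move current max to the end
            sift_down(a, 0, end - 1, counter)     # the heap shrinks by one each time
        return counter[0]

    data = random.sample(range(100000), 1000)
    print(extraction_swaps(data))   # on the order of n * log2(n) swaps
    print(data == sorted(data))     # True: the list really is sorted

The counter printed here is exactly the sum of per-removal swap counts described above, with each removal working on a smaller heap than the previous one.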

CodePudding user response:

There are at least two ways to look at this:

1. A complete binary tree has O(log n) height, so each sift-down after a root removal performs at most O(log n) swaps. Multiplying this worst-case bound by the n removals gives O(n log n); big-O only requires an upper bound, so it does not matter that removals from a smaller heap do fewer swaps.

2. Your sum is correct, but it does not give a smaller bound: log(1) + log(2) + ... + log(n) = log(n!), and log(n!) is itself Θ(n log n) (by Stirling's approximation, or by noting that the largest n/2 terms are each at least log(n/2)). So O(log(n!)) and O(n log n) describe the same growth rate.
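A short derivation of the second point (a standard bound, not part of the original answer):

    \log(n!) \;=\; \sum_{i=1}^{n} \log i
             \;\ge\; \sum_{i=\lceil n/2 \rceil}^{n} \log i
             \;\ge\; \frac{n}{2} \log \frac{n}{2},
    \qquad
    \log(n!) \;\le\; \sum_{i=1}^{n} \log n \;=\; n \log n.

Together the two inequalities give \log(n!) = \Theta(n \log n), so summing the exact per-removal costs does not produce a smaller complexity class than the simpler n × O(log n) bound.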
