If selection sort iterates over the array n times, where n is the length of the array, and each iteration makes one fewer comparison than the last (the first iteration makes n comparisons), how come the complexity of selection sort is n^2 instead of n! (n factorial)?
CodePudding user response:
Typically, multiplication of runtimes arises when you perform some operation some number of times. So, for example, if I do 10 operations 5 times, then I'd do 50 total operations.
On the other hand, addition of runtimes arises when you do one thing, then do the next, etc. So, for example, if I do 10 units of work, then 5 units of work, I'd do a total of 15 units of work.
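To make that contrast concrete, here is a minimal sketch (in Python, chosen arbitrarily since the question names no language): nested loops multiply the operation counts, while back-to-back loops add them.

```python
def multiplied_work():
    """Nested loops: 10 operations done 5 times -> 50 operations total."""
    count = 0
    for _ in range(5):        # repeat 5 times...
        for _ in range(10):   # ...doing 10 operations each time
            count += 1
    return count              # 50

def added_work():
    """Sequential loops: 10 operations, then 5 operations -> 15 total."""
    count = 0
    for _ in range(10):       # first do 10 operations
        count += 1
    for _ in range(5):        # then do 5 more
        count += 1
    return count              # 15

print(multiplied_work(), added_work())  # prints: 50 15
```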
You're correct that selection sort first looks at n items, then n-1 items, then n-2 items, then n-3 items, etc. The question is how we then go from that to a total runtime. You're proposing multiplying them together:
n · (n-1) · (n-2) · ... · 2 · 1
However, multiplying would mean "I do n things, repeat that whole process n-1 times, repeat all of that n-2 times, and so on." Selection sort doesn't nest its passes like that: each pass happens exactly once, one after another. That's why we add the work together instead:
n + (n-1) + (n-2) + ... + 2 + 1 = n(n+1)/2 = Θ(n²).
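To watch the addition happen, here is a sketch of selection sort (again in Python, as an illustration rather than the asker's code) that counts its comparisons. The exact count depends on whether you count the first pass as n or n-1 comparisons, but either way the total is quadratic, nowhere near factorial.

```python
import random

def selection_sort_with_count(arr):
    """Selection sort that also counts how many comparisons it makes."""
    a = list(arr)
    n = len(a)
    comparisons = 0
    for i in range(n - 1):            # one pass per position to fill
        min_index = i
        for j in range(i + 1, n):     # this pass scans the n-1-i remaining items
            comparisons += 1
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]   # place the smallest remaining item
    return a, comparisons

n = 100
_, count = selection_sort_with_count([random.random() for _ in range(n)])
print(count, n * (n - 1) // 2)   # both print 4950: the passes add up, they don't multiply
```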