Sedgewick's Insertion Sort (Java):
int N = a.length;
for (int i = 1; i < N; i++) {
    for (int j = i; j > 0 && less(a[j], a[j-1]); j--)
        exchange(a, j, j-1);
}
Or this one. The difference is in the exchange part: Sedgewick exchanges a[j] with a[j-1] on each iteration, whereas the following saves the moved value to a temp, shifts a[j-1] into a[j] on each iteration, and after the inner loop exits places the original value stored in temp into its final position a[j]:
for (int i = 1; i < arr.length; i++) {
    int j = i;
    int temp = arr[j];
    while (j >= 1 && temp < arr[j-1]) {
        arr[j] = arr[j-1];
        j--;
    }
    arr[j] = temp;
}
Is there any substantial difference? Are they roughly the same? Am I overthinking it too much?
CodePudding user response:
Is there any substantial difference? Are they roughly the same? Am I overthinking it too much?
They both have a nested loop and consequently O(n^2) runtime complexity. They are roughly the same; the differences are so minor that nobody cares about them in practice.
If you want to gain an understanding of what matters for sorting problems, and why, and to be able to compare them, you can look into this article.
In reality, what matters for a sorting algorithm is its asymptotic runtime complexity, which is called Big-O and denoted like this: O(f(x)).
It shows how fast/slow the algorithm is for BIG inputs. Nobody cares about small inputs, because for them every algorithm is fast.
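To make the "minor difference" concrete, here is a sketch (the class and method names, and the write counters, are my own for illustration) that counts array writes performed by each variant on the same input. The exchange-based version does two writes per swap; the shift-based version does one write per shift plus one final write per outer iteration, so it writes less, but both do the same number of comparisons and stay O(n^2):

```java
import java.util.Arrays;

public class InsertionSortWrites {
    // Sedgewick-style: exchange a[j] with a[j-1]; each exchange is 2 array writes.
    static long sortBySwaps(int[] a) {
        long writes = 0;
        for (int i = 1; i < a.length; i++) {
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
                writes += 2;
            }
        }
        return writes;
    }

    // Shift-style: hold the value in temp, shift larger elements right
    // (1 write each), then place temp with 1 final write.
    static long sortByShifts(int[] a) {
        long writes = 0;
        for (int i = 1; i < a.length; i++) {
            int temp = a[i];
            int j = i;
            while (j >= 1 && temp < a[j - 1]) {
                a[j] = a[j - 1];
                j--;
                writes++;
            }
            a[j] = temp;
            writes++;
        }
        return writes;
    }

    public static void main(String[] args) {
        int[] x = {5, 4, 3, 2, 1};
        int[] y = x.clone();
        System.out.println("swap writes:  " + sortBySwaps(x));   // 20 (2 per swap, 10 swaps)
        System.out.println("shift writes: " + sortByShifts(y));  // 14 (10 shifts + 4 placements)
        System.out.println(Arrays.equals(x, y));                 // true: same sorted result
    }
}
```

Both variants sort identically; the shift version just saves some redundant writes, which is exactly the kind of constant-factor difference Big-O analysis ignores.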
CodePudding user response:
From a purely observational perspective, they are equivalent:
- For sorted lists in ascending order, they each perform 0 swaps.
- For completely reversed lists, sorting in ascending order, they perform the same number of swaps. For N items this would be N*(N-1)/2, which is the maximum possible given the loop parameters, sans any item comparisons.
- For repeated sorting of randomly generated values, they both perform the same number of swaps.
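The reversed-list claim is easy to check empirically. Here is a small sketch (class and method names are my own) that counts the swaps the exchange-based variant performs on a strictly descending array and compares the count against N*(N-1)/2:

```java
public class ReversedSwapCount {
    // Exchange-based insertion sort that counts swaps performed.
    static long countSwaps(int[] a) {
        long swaps = 0;
        for (int i = 1; i < a.length; i++) {
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
                swaps++;
            }
        }
        return swaps;
    }

    public static void main(String[] args) {
        int n = 8;
        int[] a = new int[n];
        for (int k = 0; k < n; k++) a[k] = n - k;  // 8, 7, ..., 1 (fully reversed)
        long swaps = countSwaps(a);
        System.out.println(swaps);                  // 28, which equals 8*7/2
        System.out.println(swaps == (long) n * (n - 1) / 2);  // true
    }
}
```

Every pair of items is out of order in a reversed list, so each of the N*(N-1)/2 pairs costs exactly one swap, matching the maximum stated above.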