I have a question about merge sort algorithm

Time:11-29

I've looked at the merge sort example code, but there's something I don't understand.

void mergesort(int left, int right)
{
    if (left < right)
    {
        int sorted[LEN];   /* temporary merge buffer; arr and LEN are globals */
        int mid, p1, p2, idx;

        mid = (left + right) / 2;

        /* sort each half recursively */
        mergesort(left, mid);
        mergesort(mid + 1, right);

        p1 = left;         /* front of the left run,  arr[left..mid]    */
        p2 = mid + 1;      /* front of the right run, arr[mid+1..right] */
        idx = left;

        /* repeatedly take the smaller front element of the two runs */
        while (p1 <= mid && p2 <= right)
        {
            if (arr[p1] < arr[p2])
                sorted[idx++] = arr[p1++];
            else
                sorted[idx++] = arr[p2++];
        }

        /* copy the remainder of whichever run has not been exhausted */
        while (p1 <= mid)
            sorted[idx++] = arr[p1++];

        while (p2 <= right)
            sorted[idx++] = arr[p2++];

        for (int i = left; i <= right; i++)
            arr[i] = sorted[i];
    }
}

In this code, I don't understand the while loops that advance p1 and p2.

More specifically, the code copies the elements at p1 and p2 into the 'sorted' array in turn, but I don't see why the result ends up in ascending order.

I looked at many explanations on the internet, but couldn't find a clear answer.

I would appreciate it if you could write your answer in detail so that I can understand it.

CodePudding user response:

"why the array is sorted in ascending order"

Merge sort divides an array of n elements into n runs of 1 element each. Each single-element run can be considered sorted, since it contains only one element. Pairs of single-element runs are merged to create sorted runs of 2 elements each, pairs of 2-element runs are merged to create sorted runs of 4 elements each, and the process continues until a single sorted run equal to the size of the original array is created.

The example in the question is a top-down merge sort, which recursively splits the array in half until the base case of a single-element run is reached. After that, merging follows the call chain, depth first and left first. Most libraries use some variation of bottom-up merge sort (along with insertion sort to detect or create small sorted runs). A bottom-up merge sort does no recursive splitting: an array of n elements is treated as n runs of 1 element each, and adjacent even and odd runs are merged, left to right, in each merge pass. After ceiling(log2(n)) passes, the array is sorted.

The example code has an issue: it allocates an entire array on the stack at each level of recursion, which will cause a stack overflow for large arrays. The Wiki examples are better, although the bottom-up example should swap references rather than copy the array.

https://en.wikipedia.org/wiki/Merge_sort

CodePudding user response:

I'm a developer working in the field. I was pleasantly surprised to see you implementing merge sort.

Before we start: the time complexity of merge sort is O(n log n). The reason can be found in the merge sort process itself!

First, let's assume that there is an unordered array.

Merge sort process:

  1. Divide the array into single-element subarrays, one per element.
  2. Create a destination array large enough to hold two adjacent subarrays.
  3. Compare the front elements of the two subarrays and copy the smaller one into the destination array, in order, until both are exhausted.
  4. Repeat this merging until a single array the size of the original is produced.

[merge sort diagram]

There is a reason why the time complexity of merge sort is O(n log n).

Because the array is repeatedly split in half, there are about log n levels of merging, and each level compares and copies all n elements, so each level costs O(n); multiplying the two gives O(n log n) in total.
