Making the complexity smaller (better)


I have an algorithm that counts the good pairs in a list of numbers. A good pair is defined as a pair of indices i < j with arr[i] < arr[j]. It currently has a complexity of O(n^2), but I want to make it O(n log n) using divide and conquer. How can I go about doing that?

Here's the algorithm:

def goodPairs(nums):
    count = 0
    for i in range(0,len(nums)):
        for j in range(i+1, len(nums)):
            if i < j and nums[i] < nums[j]:
                count += 1
                j += 1
            j += 1
    return count

Here's my attempt at writing it, but it just returns 0:

def goodPairs(arr):
    count = 0 
    if len(arr) > 1:
        # Finding the mid of the array
        mid = len(arr)//2
  
        # Dividing the array elements
        left_side = arr[:mid]
  
        # into 2 halves
        right_side = arr[mid:]
  
        # Sorting the first half
        goodPairs(left_side)
  
        # Sorting the second half
        goodPairs(right_side)

        for i in left_side:
            for j in right_side:
                if i < j:
                    count += 1
    return count

CodePudding user response:

One of the most well-known divide-and-conquer algorithms is merge sort, and it is actually a really good foundation for this algorithm.

The idea is that when you compare two numbers from two different 'partitions', you already know a lot about the remaining parts of those partitions, because each partition is sorted by the time it is merged.

Let's take an example!

Consider the following partitions, which have already been sorted individually and whose internal "good pairs" have already been counted.

Partition x: [1, 3, 6, 9].

Partition y: [4, 5, 7, 8].

It is important to note that the numbers in partition x are located further to the left in the original list than those in partition y. In particular, the index i of every element in x is smaller than the index j of every element in y.

We will start off by comparing 1 and 4. Obviously 1 is smaller than 4. But since 4 is the smallest element in partition y, 1 must also be smaller than the rest of the elements in y. Consequently, this single comparison yields 4 good pairs at once: 1 is smaller than all four elements of y, and its index is smaller than all of their indices.

The exact same thing happens with 3, and we can add 4 more good pairs to the sum.

For 6 we conclude that there are two new good pairs, (6, 7) and (6, 8); the comparisons between 6 and 4, and between 6 and 5, did not yield good pairs. Finally, 9 is larger than every element of y and contributes nothing, so the merge step finds 4 + 4 + 2 + 0 = 10 good pairs in total.

You might now notice how these additional good pairs are counted: whenever the element from x is less than the element from y, add the number of elements remaining in y to the sum and advance to the next element of x. Rinse and repeat.
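
To make that rule concrete, here is a minimal sketch of just the counting step, run on the example partitions above (the variable names are mine, for illustration, not from the original post):

x = [1, 3, 6, 9]  # left partition, already sorted
y = [4, 5, 7, 8]  # right partition, already sorted

pairs = 0
i = j = 0
while i < len(x) and j < len(y):
    if x[i] < y[j]:
        # x[i] is smaller than y[j], and y is sorted, so x[i] is
        # also smaller than everything after y[j].
        pairs += len(y) - j
        i += 1
    else:
        j += 1

print(pairs)  # 10 = 4 (for 1) + 4 (for 3) + 2 (for 6) + 0 (for 9)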

Since merge sort is an O(n log n) algorithm, and the counting adds only constant work per comparison in the merge step, this algorithm is also O(n log n).

I will leave the actual programming as an exercise for you.

CodePudding user response:

@niklasaa has added an explanation for the merge sort analogy, but your implementation still has an issue.

You are partitioning the array and computing the result for each half, but

  1. You haven't actually sorted either half. So the linear, merge-style comparison of their elements described in the other answer can't be applied.
  2. You haven't used the results of the recursive calls in the final computation. That's why you're getting an incorrect answer.

For point #1, you should look at merge sort, especially the merge() function. That logic is what will give you the correct pair count without an O(N^2) iteration.

For point #2, store the result of each half first:

# Count the good pairs in the first half
leftCount = goodPairs(left_side)

# Count the good pairs in the second half
rightCount = goodPairs(right_side)

When returning the final count, add these two results as well.

return count + leftCount + rightCount
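
Putting both fixes together, a minimal sketch of the complete algorithm could look like the following (this is my own illustration of the approach described above, not the original poster's code; note that it sorts the list in place as a side effect, so pass in a copy if you need to keep the original order):

def goodPairs(arr):
    if len(arr) <= 1:
        return 0

    mid = len(arr) // 2
    left_side = arr[:mid]
    right_side = arr[mid:]

    # Count the good pairs inside each half; the recursive calls
    # also leave each half sorted.
    count = goodPairs(left_side) + goodPairs(right_side)

    # Standard merge step, with one extra line that counts the
    # cross pairs between the two sorted halves.
    i = j = k = 0
    while i < len(left_side) and j < len(right_side):
        if left_side[i] < right_side[j]:
            # left_side[i] forms a good pair with right_side[j] and
            # with every later element of right_side.
            count += len(right_side) - j
            arr[k] = left_side[i]
            i += 1
        else:
            arr[k] = right_side[j]
            j += 1
        k += 1

    # Copy whatever is left over.
    while i < len(left_side):
        arr[k] = left_side[i]
        i += 1
        k += 1
    while j < len(right_side):
        arr[k] = right_side[j]
        j += 1
        k += 1

    return count

print(goodPairs([1, 3, 6, 9, 4, 5, 7, 8]))  # 22: 6 + 6 within the halves, 10 across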