Best Case and Worst Case of an Algorithm: When are they considered to be the "same"?


I am trying to determine the Best Case and Worst Case of the algorithm below, but I have been really confused, since our professor claims that the Best Case and Worst Case of the algorithm are the "same".


    import math
    def sum(Arr, left, right):
        if(left > right):         # empty range
            return 0
        elif(left == right):      # single element
            return Arr[left]
        else:
            mid = left + math.floor((right - left) / 2)   # midpoint of the range
            leftSum = sum(Arr, left, mid)
            rightSum = sum(Arr, mid + 1, right)
            return leftSum + rightSum
    def findSum(Arr):
        return sum(Arr, 0, len(Arr) - 1)


    arr = [1, 2, 3, 4, 5, 7, 8, 9]
    print(findSum(arr))

The algorithm finds the sum of an array by divide-and-conquer, and here were my initial thoughts:

Best Case: The input array only has 1 element, so it meets the second condition and returns Arr[left]. No recursion is involved, and it takes Constant Time (O(1)).

Worst Case: The input array could be arbitrarily large (N elements), and every element needs to be scanned at least once, so it would take Linear Time (O(N)).

I saw an explanation on another post with a similar question: no matter how large or small the input array is, the algorithm has to loop over it once, so it's Linear Time for both the Best and Worst Case. But I am still confused: the Best Case of the algorithm does not involve any recursion, so can we still claim that it is Linear Time (O(N)) instead of Constant Time (O(1))?

CodePudding user response:

Your understanding of time complexity is off, and you are mixing it up with best-case/worst-case scenarios, which makes it more confusing. An algorithm isn't considered constant just because the input data has length 1 and it does only one operation. The length of the input data has nothing to do with best/worst-case scenarios.

Time complexity relates the growth of the number of operations an algorithm does to the size of its input data. So we say an algorithm is constant when the number of operations does not depend on the size of the input, and linear when the number of operations grows proportionally to the size of the input.
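As a rough illustration, here is a minimal sketch: looking up one element is constant time no matter how long the list is, while adding up all the elements is linear time, because the loop body runs once per element.

    # Constant time: the work does not depend on len(Arr).
    def first_element(Arr):
        return Arr[0]

    # Linear time: the number of additions grows with len(Arr).
    def total(Arr):
        s = 0
        for x in Arr:
            s += x
        return s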

The best/worst-case scenarios are about the content of the data and how it's already structured. This is relevant in, say, some sorting algorithms. Insertion sort, for example, only does linear work if the data is already sorted (best case), but quadratic work if it is sorted in reverse (worst case). Merge sort, on the other hand, does linearithmic work regardless of how the input data is ordered.
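A quick way to see this is to count the comparisons insertion sort makes. The sketch below (the `insertion_sort_comparisons` helper is just illustrative) runs it on a sorted and on a reverse-sorted array of the same length:

    def insertion_sort_comparisons(Arr):
        # Returns the number of comparisons insertion sort performs on Arr.
        Arr = list(Arr)            # copy, so the caller's list is untouched
        comparisons = 0
        for i in range(1, len(Arr)):
            key = Arr[i]
            j = i - 1
            while j >= 0:
                comparisons += 1
                if Arr[j] <= key:  # already in place, stop scanning
                    break
                Arr[j + 1] = Arr[j]
                j -= 1
            Arr[j + 1] = key
        return comparisons

    n = 8
    print(insertion_sort_comparisons(list(range(n))))         # sorted input: n-1 = 7 comparisons
    print(insertion_sort_comparisons(list(range(n, 0, -1))))  # reversed input: n*(n-1)/2 = 28 comparisons

Same length, same values, but the ordering of the data alone decides whether the work is linear or quadratic.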

I repeat, time complexity is the rate of growth of the number of operations an algorithm does in relation to the growth of its input data.

Now, to your example: does the data (not its quantity) have any impact on your algorithm? Does it matter for summing whether the numbers are, say, sorted or not? Or whether they are small or large (within reason, of course)? It does not. For a given array length, the work is always the same. So the best and worst case are the same, as your professor says.
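You can check this by instrumenting your own function. The sketch below uses an illustrative `sum_count` variant that counts every recursive call; the count comes out the same for every array of a given length, whatever the values are.

    import math

    def sum_count(Arr, left, right, counter):
        # Same divide-and-conquer sum as in the question, but it also
        # increments counter[0] on every call so we can measure the work.
        counter[0] += 1
        if left > right:
            return 0
        elif left == right:
            return Arr[left]
        else:
            mid = left + math.floor((right - left) / 2)
            return sum_count(Arr, left, mid, counter) + sum_count(Arr, mid + 1, right, counter)

    for arr in ([1, 2, 3, 4, 5, 7, 8, 9],    # the array from the question
                [9, 8, 7, 5, 4, 3, 2, 1],    # same values, reversed
                [0, 0, 0, 0, 0, 0, 0, 0]):   # same length, different values
        counter = [0]
        total = sum_count(arr, 0, len(arr) - 1, counter)
        print(total, counter[0])             # 15 calls for all three arrays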
