Can an algorithm with higher Complexity be Faster?


I have written code for a problem using two nested loops, but it runs too long; its complexity is O(n^2).

So I googled for a faster solution to the same problem and found the second code below, which uses a triple-nested loop with complexity O(n^3).

Is it because the number of computations is actually higher for the first code, even though it has the lower big O?

If so, can I conclude that big O is not reliable for small values of n, and that I have to experiment to judge which is faster?

Code 1:

def sherlockAndAnagrams(s):
    # 1. Traverse all possible substrings within the string
    count = 0
    lst_char_poss_str = []
    len_s = len(s)

    for i in range(len_s):  # for each char in the string
        temp_str = ""  # a temp string accumulating characters from that char onward

        for j in range(i, len_s):  # for every possible substring starting at that char
            temp_str += s[j]  # grow the substring by one character
            lst_char_poss_str.append(temp_str)  # all possible substrings within the string

    # 2. Check if any two substrings of equal length are anagrams
    new_lst_char_poss_str = []

    for i in lst_char_poss_str:
        i = list(i)  # sorted list, so "abb" and "bba" both become ['a', 'b', 'b']
        i.sort()
        new_lst_char_poss_str.append(i)  # a 2-d list of character lists, one per substring

    len_new_s = len(new_lst_char_poss_str)

    for i in range(len_new_s - 1):
        for j in range(i + 1, len_new_s):
            if new_lst_char_poss_str[i] == new_lst_char_poss_str[j]:
                count += 1

    return count
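As an aside, the pairwise comparison in step 2 is what dominates the runtime. The same count can be obtained by grouping the sorted substrings in a hash map, since k identical sorted forms yield k·(k-1)/2 anagram pairs. A minimal sketch (not part of the question's code; the function name is mine):

```python
from collections import Counter

def sherlock_and_anagrams_counter(s):
    # Count occurrences of each substring's sorted form.
    counts = Counter()
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            counts[''.join(sorted(s[i:j]))] += 1
    # k identical sorted substrings form k * (k - 1) // 2 anagram pairs.
    return sum(k * (k - 1) // 2 for k in counts.values())
```

This replaces the quadratic pairwise scan over the substring list with a single pass plus a dictionary lookup per substring.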

Code 2:

def sherlockAndAnagrams(s):
    count = 0
    slen = len(s)

    for i in range(slen):
        for j in range(i + 1, slen):
            substr = ''.join(sorted(s[i:j]))  # sorting all characters of the substring
            sublen = len(substr)

            for x in range(i + 1, slen):
                if x + sublen > slen:  # if index out of range
                    break

                substr2 = ''.join(sorted(s[x:x + sublen]))
                if substr == substr2:
                    anagrams += 1

    return count

CodePudding user response:

There are several points to consider:

  • the second algorithm always returns 0: nothing ever increments count
  • in the first, the string concatenation temp_str += s[j] is not efficient; the second avoids it
  • the second is faster because it uses slicing to retrieve pieces of the string, but to be sure you should do a precise profile of the code
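The profiling suggestion above can be done with `timeit` on identical inputs. A minimal sketch with two toy stand-in functions (hypothetical names; substitute the question's two versions when measuring for real):

```python
import timeit

# Toy stand-ins for the two versions being compared
# (hypothetical; both count pairs of equal characters).
def version_a(s):
    return sum(1 for i in range(len(s)) for j in range(len(s)) if s[i] == s[j])

def version_b(s):
    return sum(s.count(c) for c in s)

s = "abba" * 50
t_a = timeit.timeit(lambda: version_a(s), number=100)
t_b = timeit.timeit(lambda: version_b(s), number=100)
print(f"version_a: {t_a:.4f}s  version_b: {t_b:.4f}s")
```

Timing the real functions on the same strings, over a range of lengths, shows directly where one overtakes the other.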

Other than this, as @pjs said, big-O notation is an asymptotic statement.

CodePudding user response:

You might have an algorithm whose running time is 1,000,000 n, because of some constant overhead in each step. 1,000,000 n is O(n), because it is bounded above by some constant times n. And you might have some other algorithm with a running time of 2 n^2, which is O(n^2).

You would say that the 1,000,000 n algorithm is better: the one with linear, O(n), running time beats O(n^2). That is true, but only in the limit, and the limit kicks in very late, when n is really large. For small instances, the 2 n^2 algorithm might actually take less time than your 1,000,000 n one. We must be careful about the constants as well.
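The crossover point for these two example cost functions can be checked numerically: 1,000,000 n exceeds 2 n^2 only while n is below 500,000. A small sketch (the function names are mine):

```python
def linear_cost(n):
    # the O(n) algorithm with a large constant factor
    return 1_000_000 * n

def quadratic_cost(n):
    # the O(n^2) algorithm with a small constant factor
    return 2 * n ** 2

# Below the crossover, the "worse" O(n^2) algorithm is actually cheaper.
print(linear_cost(1_000) > quadratic_cost(1_000))        # 10^9 vs 2 * 10^6
print(linear_cost(500_000) == quadratic_cost(500_000))   # crossover at n = 500,000
print(linear_cost(1_000_000) < quadratic_cost(1_000_000))
```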
