Understanding time/space complexity for Leetcode 1366


I've solved LeetCode 1366 but I'm having some trouble working out the proper time/space complexity. I've looked at the solutions tab, but the answers vary and I'm not sure I buy some of them. Here is my code and the general algorithm:

from typing import List

class Solution:
    def rankTeams(self, votes: List[str]) -> str:

        rankings = {}
        team_count = len(votes[0])

        # Count, for each team, how many votes it received at each position.
        for vote in votes:
            for position, team in enumerate(vote):
                if team not in rankings:
                    rankings[team] = [0 for _ in range(team_count)]

                rankings[team][position] += 1
        
        alphabetically_ordered_teams = sorted(rankings.keys()) 
        
        final_ranking = sorted(alphabetically_ordered_teams, key = lambda team : rankings[team], reverse=True)
        
        return ''.join(final_ranking)
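
For reference, this is what the method returns on the first example from the problem statement, where A gets every first-place vote and C beats B on second-place votes:

print(Solution().rankTeams(["ABC", "ACB", "ABC", "ACB", "ACB"]))  # prints "ACB"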

Algorithm:

Iterate through the array of votes and build a hashmap where each team maps to an array, in which array[i] is the number of votes that team received at position i.

For example, for the case ["ABC"], the map would look like:

{
  A: [1, 0, 0],
  B: [0, 1, 0],
  C: [0, 0, 1]
}

Sort the hashmap's keys in alphabetical order, e.g. ["A", "B", "C"].

Sort this alphabetically ordered list using each element's value in the hashmap we built earlier. This way we compare their values, which are arrays of vote counts at each position, and if there is a tie, we can fall back on the alphabetical ordering to get the order we want. We sort in reverse because the result should be descending.
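
To make the tie-breaking concrete, here is a minimal sketch (the vote counts below are made up) of why the two-pass sort works: Python compares lists lexicographically, and sorted() is stable even with reverse=True, so teams whose vote arrays are identical keep the alphabetical order established by the first sort.

rankings = {"A": [5, 0, 1], "B": [2, 3, 1], "C": [2, 3, 1]}  # B and C tie at every position
alphabetical = sorted(rankings)                              # ['A', 'B', 'C']
final = sorted(alphabetical, key=lambda t: rankings[t], reverse=True)
print(final)  # ['A', 'B', 'C'] -- B stays ahead of C thanks to the earlier alphabetical sort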


I think time and space complexities should be O(N) and O(1), respectively, but I'm not sure.

Iterating through the array of strings will be O(N * 26) because the constraints state that votes[i].length is <= 26, which is the number of lowercase letters a->z. Even if we allowed more characters, it would still be a constant factor, since there is a constant number of Unicode characters.

Sorting the hashmap's keys would be an O(1) operation in both time and space because there can only be 26 keys.

The last sorting operation: I think the number of elements in each of the hashmap's value arrays is equal to the number of characters a given vote can have. So for "ABC", there will only be 3 elements in the array, corresponding to the votes at each of the 3 positions, such as [0, 0, 0]. Thus I think this sort will also be a constant time/space operation.

I think the lambda function used as the sort key would have to compare the elements of the two value arrays being sorted. Is that O(N^2)? I guess O(26^2) in this case? Or O(1) overall.
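
One detail worth making explicit (the lists below are made up for illustration): Python compares lists element by element, so a single key comparison inside the sort can cost up to the length of the vote arrays when they share a long equal prefix. In other words, each comparison is O(26) here, or O(n) if the number of teams is treated as a variable.

a = [3, 1, 4, 1, 5]
b = [3, 1, 4, 1, 6]
print(a < b)  # True, but only after walking past the equal prefix [3, 1, 4, 1]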

Thus my overall time complexity was something like O((N * 26) + 26log26 * 26^2) => O(N), and space complexity O(1).

Is this wrong?

CodePudding user response:

When you determine the asymptotic complexity of the time or space used by actual code, you have to decide which values to treat as variables and which as constants. In this case, if you make this determination based on the constraints of the LeetCode problem

1 <= votes.length <= 1000
1 <= votes[i].length <= 26

then the possible data size is totally bounded by constants, so under this formalization the running time is O(1), but that isn't particularly useful. It is usually more useful to figure out the time complexity letting everything be a variable, i.e. we can imagine running your algorithm over data in which there can be arbitrarily many teams, not just 26; we would just need a new input format. So let's let the number of teams be a variable as well as the number of voters.
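
As an aside, here is a rough sketch of what such a generalized input format could look like (the function name rank_teams_general and the list-of-lists format are my own invention, not part of the LeetCode problem): each vote is a list of team names instead of a character string, so the number of teams n can grow without bound while the algorithm stays the same.

from typing import Dict, List

def rank_teams_general(votes: List[List[str]]) -> List[str]:
    # Same counting idea as the original solution: team -> votes received at each position.
    team_count = len(votes[0])
    rankings: Dict[str, List[int]] = {}
    for vote in votes:
        for position, team in enumerate(vote):
            rankings.setdefault(team, [0] * team_count)[position] += 1
    # Alphabetical sort first, then a stable descending sort on the vote-count arrays.
    return sorted(sorted(rankings), key=lambda t: rankings[t], reverse=True)

# Example: three voters ranking four teams.
print(rank_teams_general([["red", "blue", "green", "gold"],
                          ["red", "green", "blue", "gold"],
                          ["blue", "red", "green", "gold"]]))  # ['red', 'blue', 'green', 'gold']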

If n is the number of teams and m is the number of voters, building the hashmap is O(n*m) time. Each time you sort the teams it will be O(n log(n)), assuming we use a comparison-based sort. You will sort the teams at most m + 1 times: once for each voter plus once alphabetically. This means the time complexity is O(m * n * log(n)) -- we can ignore the + 1 because it just leads to an extra n log(n) term which is dwarfed by the whole running time, and we can ignore building the hash table because it only happens once and is also dwarfed by the whole running time.
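
If you want to sanity-check the n log(n) factor empirically, one rough approach (the counting comparator below is purely illustrative) is to count comparisons while sorting n random keys via functools.cmp_to_key:

import functools
import random

comparisons = 0

def counting_cmp(a, b):
    global comparisons
    comparisons += 1
    return (a > b) - (a < b)

n = 1000
keys = list(range(n))
random.shuffle(keys)
sorted(keys, key=functools.cmp_to_key(counting_cmp))
print(comparisons)  # a bit under n * log2(n), which is about 10,000 for n = 1000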

Space complexity is dominated by what the hashtable uses; it's O(m*n) in size.
