Can we say O(n + m) is equivalent to O(n), when m is constant or when n > m


I was solving one of the LeetCode problems (Longest Palindrome), where I am

  • traversing the whole string (s) to count the frequency of each character
  • traversing the array (charFreq) to determine whether the frequency of a character is odd and doing operations accordingly.

1. Implementing using array

   public int longestPalindrome(String s) {

    int[] charFreq = new int[58];
    for(char sChar : s.toCharArray()){
        charFreq[sChar - 'A']++;
    }

    int charsWithOddFreq = 0;
    for(int freq : charFreq){
      if((freq & 1) != 0){
        charsWithOddFreq++;
      }
    }

    int len = s.length();
    return (charsWithOddFreq == 0) ? len : (len - charsWithOddFreq + 1);
  }

where, 1 <= s.length <= 2000
s consists of lowercase and/or uppercase English letters only.

But, is it correct to say the time complexity of the above program is O(n + m), where n is the length of the given string (s) and m is the size of the array used to store frequencies (charFreq)? Or is it better to say just O(n), since we can neglect the constant factor m, which does not depend on the size of the input string, and the iteration over the array is always the same (i.e. 58 steps)?
In short, is O(n + m) ~ O(n) in this case?
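A quick way to sanity-check the array version is to wrap it in a runnable class; the Solution class name and the main method below are my additions, not part of the original problem:

```java
class Solution {
    public int longestPalindrome(String s) {
        // 58 slots cover 'A' (65) through 'z' (122): 122 - 65 + 1 = 58.
        int[] charFreq = new int[58];
        for (char sChar : s.toCharArray()) {
            charFreq[sChar - 'A']++;
        }

        int charsWithOddFreq = 0;
        for (int freq : charFreq) {
            if ((freq & 1) != 0) {
                charsWithOddFreq++;
            }
        }

        // One character with odd frequency may sit in the middle of the palindrome.
        int len = s.length();
        return (charsWithOddFreq == 0) ? len : (len - charsWithOddFreq + 1);
    }

    public static void main(String[] args) {
        Solution sol = new Solution();
        System.out.println(sol.longestPalindrome("abccccdd")); // prints 7, e.g. "dccaccd"
        System.out.println(sol.longestPalindrome("AaBb"));     // prints 1: all frequencies odd
    }
}
```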

2. Implementing same using Map

public int longestPalindrome(String s) {

    Map<Character, Integer> charFreq = new HashMap<>();
    for(char sChar : s.toCharArray()) {
      charFreq.put(sChar, charFreq.getOrDefault(sChar, 0) + 1);
    }

    int charWithOddFreq = 0;
    for(int val : charFreq.values()) {
      if((val & 1) != 0) {
          charWithOddFreq++;
      }
    }

    int len = s.length();
    return (charWithOddFreq == 0) ? len : (len - charWithOddFreq + 1);
  }

Using a Map, the time complexity should be O(n + m), as m varies with the input string: it depends on how many distinct characters the input contains. But my question in this case (when using a Map) is: can we say O(n + m) is O(n) when n > m? Because according to the constraints, i.e.
1 <= s.length <= 2000
and s consists of lowercase and/or uppercase English letters only,
the length of the string (n) > the number of lowercase and uppercase letters (m). In short, is O(n + m) ~ O(n) in this case too?

CodePudding user response:

A1. Since m is a constant (58) in your first algorithm, this is O(n) time. O(n + constant) is the same as O(n).

A2. Your second algorithm is also O(n) time.

Using Map, the time complexity should be O(n + m), as m will vary for input string as it is dependent on the input size.

You haven't stated it explicitly, but in this case, n is the number of characters in s, and m is the number of distinct characters in s.

The variables m and n are not independent. In fact, m will always be less than or equal to n.

But m + n <= 2n, and O(2n) is the same as O(n). So O(m + n) is the same as O(n) in this case.
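The bound m <= n is easy to confirm empirically: every distinct character must occur at least once, so the distinct count can never exceed the string length. A minimal sketch (the DistinctBound class name is made up for this illustration):

```java
import java.util.HashSet;
import java.util.Set;

public class DistinctBound {
    // m = number of distinct characters in s; each one must occupy
    // at least one position, so m can never exceed n = s.length().
    static int distinctCount(String s) {
        Set<Character> seen = new HashSet<>();
        for (char c : s.toCharArray()) {
            seen.add(c);
        }
        return seen.size();
    }

    public static void main(String[] args) {
        String s = "abccccdd";
        int n = s.length();        // 8
        int m = distinctCount(s);  // 4 distinct characters: a, b, c, d
        System.out.println(m + " <= " + n); // prints 4 <= 8
    }
}
```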


(The above isn't a rigorous proof ... but it should be sufficient to highlight the flaw in your reasoning. If you want a rigorous proof, it is pretty straightforward, albeit tedious.)


And to answer the question in the (corrected) title:

Can we say O(n + m) is equivalent to O(n), when m is constant or when n > m

Yes. See above.


Note that the fact that both versions have the same (time) complexity doesn't mean that their performance will be equivalent.

CodePudding user response:

I fully agree with Stephen's answer, but would go one step further. Since there is an upper limit to the input size, both algorithms are O(1) - because, for all valid inputs, there is a constant number of operations that neither algorithm will exceed.

Asking whether the array-based or map-based algorithm has the better asymptotic worst-case time complexity (aka Big-O) makes little sense, unless the size of those maps can grow over a significant range.

I really recommend attempting to plot out the differences in time that each algorithm needs for a variety of (valid) inputs. You should find that, for such small inputs, both perform very similarly, and there is very little growth of time depending on input size. Due to the overhead of maps, I would expect the array-based one to be faster; but again, for such small problems, the difference will not be very notable in terms of raw milliseconds.
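Along those lines, here is a rough sketch of such a comparison (the class and method names are mine, the System.nanoTime figures are only indicative, and a proper measurement would use a benchmark harness like JMH):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class PalindromeTiming {
    static int viaArray(String s) {
        int[] freq = new int[58];
        for (char c : s.toCharArray()) freq[c - 'A']++;
        int odd = 0;
        for (int f : freq) if ((f & 1) != 0) odd++;
        return odd == 0 ? s.length() : s.length() - odd + 1;
    }

    static int viaMap(String s) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : s.toCharArray()) freq.merge(c, 1, Integer::sum);
        int odd = 0;
        for (int f : freq.values()) if ((f & 1) != 0) odd++;
        return odd == 0 ? s.length() : s.length() - odd + 1;
    }

    public static void main(String[] args) {
        // Build a random 2000-character string at the upper constraint limit.
        String letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
        Random rnd = new Random(42);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 2000; i++) {
            sb.append(letters.charAt(rnd.nextInt(letters.length())));
        }
        String s = sb.toString();

        // Both versions must agree on the answer.
        System.out.println(viaArray(s) == viaMap(s)); // prints true

        // Crude wall-clock comparison over many repetitions.
        long t0 = System.nanoTime();
        for (int i = 0; i < 10_000; i++) viaArray(s);
        long t1 = System.nanoTime();
        for (int i = 0; i < 10_000; i++) viaMap(s);
        long t2 = System.nanoTime();
        System.out.printf("array: %d ms, map: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```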
