Do problem constraints change the time complexity of algorithms?


Let's say that the algorithm involves iterating through a string character by character.

If I know for sure that the length of the string is less than, say, 15 characters, will the time complexity be O(1) or will it remain as O(n)?

CodePudding user response:

It depends.

If your algorithm's requirements would grow if larger inputs were provided, then the algorithmic complexity can (and should) be evaluated independently of the inputs. So iterating over all the elements of a list, array, string, etc., is O(n) in relation to the length of the input.

If your algorithm is tied to the limited input size, then that fact becomes part of your algorithmic complexity. For example, maybe your algorithm only iterates over the first 15 characters of the input string, regardless of how long it is. Or maybe your business case simply indicates that a larger input would be an indication of a bug in the calling code, so you opt to immediately exit with an error whenever the input size is larger than a fixed number. In those cases, the algorithm will have constant requirements as the input length tends toward very large numbers.
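
To make the two cases concrete, here is a minimal C++ sketch; the vowel-counting task and the function names are illustrative assumptions, not part of the question:

#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <string>

// Case 1: the work grows with the input, O(n) in the string length.
std::size_t countVowels(const std::string& s) {
    std::size_t count = 0;
    for (char c : s) {                                   // touches every character
        if (std::string("aeiouAEIOU").find(c) != std::string::npos) ++count;
    }
    return count;
}

// Case 2: the algorithm itself caps the work; only the first 15 characters
// are ever examined, so the cost stays constant no matter how long s gets.
std::size_t countVowelsPrefix(const std::string& s) {
    const std::size_t limit = std::min<std::size_t>(s.size(), 15);
    std::size_t count = 0;
    for (std::size_t i = 0; i < limit; ++i) {
        if (std::string("aeiouAEIOU").find(s[i]) != std::string::npos) ++count;
    }
    return count;
}

// Case 3: oversized input is rejected up front, so any accepted input is bounded.
std::size_t countVowelsChecked(const std::string& s) {
    if (s.size() > 15) throw std::invalid_argument("input longer than 15 characters");
    return countVowels(s);
}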

From Wikipedia

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
...
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.

In practice, almost all inputs have limits: you cannot input a number larger than what's representable by the numeric type, or a string that's larger than the available memory space. So it would be silly to say that such practical limits change an algorithm's asymptotic complexity. You could, in theory, use 15 as your asymptote (or "particular value"), and therefore use Big-O notation to define how an algorithm grows as the input approaches that size. There are some algorithms with such terrible complexity (or some execution environments with limited-enough resources) that this would be meaningful.

But if your argument (string length) does not tend toward a large enough value for some aspect of your algorithm's complexity to define the growth of its resource requirements, it's arguably not appropriate to use asymptotic notation at all.

CodePudding user response:

NO!

The time complexity of an algorithm is independent of problem constraints. Here is a (simple) way of thinking about it:

Say your algorithm iterates over the string and appends all consonants to a list.

Now, the time complexity of the iteration is O(n). This means that the time taken will increase roughly in proportion to the length of the string. (The actual wall-clock time would still vary with details such as the cost of the if statement and branch prediction.)
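
Here is a rough C++ sketch of that algorithm (the function name and the exact consonant test are just for illustration):

#include <cctype>
#include <string>
#include <vector>

// Collects every consonant of s into a vector. One pass over the input,
// so the number of loop iterations, and hence the work, grows linearly
// with the length of the string: O(n).
std::vector<char> collectConsonants(const std::string& s) {
    std::vector<char> consonants;
    const std::string vowels = "aeiouAEIOU";
    for (char c : s) {
        if (std::isalpha(static_cast<unsigned char>(c)) &&
            vowels.find(c) == std::string::npos) {
            consonants.push_back(c);   // cost of this branch varies; the iteration count does not
        }
    }
    return consonants;
}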

The fact that you know that the string is between 1 and 15 characters long will not change how the program runs; it merely tells you what to expect.

For example, knowing that your values are going to be less than 65,536, you could store them in an unsigned 16-bit integer and not worry about integer overflow.
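
A tiny C++ sketch of that idea (the specific values are arbitrary):

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // Knowing every value fits below 65,536 makes an unsigned 16-bit type safe;
    // it changes the storage choice, not the fact that summing n values is O(n).
    std::vector<std::uint16_t> values = {12, 42000, 65535};
    std::uint32_t sum = 0;   // wider accumulator so the running total cannot overflow
    for (std::uint16_t v : values) sum += v;
    std::cout << sum << '\n';
}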

CodePudding user response:

No

It would be O(n) if you only iterate once, i.e. one loop.

Basically, O(n) answers the question: if the input has a length of n, how many operations does this algorithm do relative to n? If there is only one loop, the answer is n. (This is oversimplified.)

We don't really care that the string length is at most 15; we just care about the growth as a function of n. Is it linear? Is it quadratic?

In other words: how many operations are you going to do relative to the input length n? Instead of saying "I will do 15 operations", say "I will do 1 x n", i.e. O(n). Instead of saying "15 x 15", say "n^2", i.e. O(n^2), and so on.
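
A throwaway C++ sketch of that counting exercise (both function names are invented):

#include <cstddef>
#include <string>

// One loop over the input: about n operations, so O(n).
std::size_t linearWork(const std::string& s) {
    std::size_t ops = 0;
    for (std::size_t i = 0; i < s.size(); ++i) ++ops;
    return ops;   // roughly n
}

// A loop nested inside a loop: about n * n operations, so O(n^2).
std::size_t quadraticWork(const std::string& s) {
    std::size_t ops = 0;
    for (std::size_t i = 0; i < s.size(); ++i)
        for (std::size_t j = 0; j < s.size(); ++j) ++ops;
    return ops;   // roughly n^2
}

For a 15-character string the first does about 15 operations and the second about 225; Big-O keeps only the shapes, n and n^2.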

CodePudding user response:

In the mathematical sense, yes. Big-O notation describes the behavior of an algorithm in the limit, and if you have a fixed upper bound on the input size, the complexity is bounded by a constant.

That said, context is important. All computers have a realistic limit to the amount of input they can accept (a technical upper bound). Just because nothing in the world can store a yottabyte of data doesn't mean saying every algorithm is O(1) is useful! It's about applying the mathematics in a way that makes sense for the situation.

Here are two contexts for your example, one where it makes sense to call it O(1), and one where it does not.

  1. "I decided I won't put strings of length more than 15 into my program, therefore it is O(1)". This is not a super useful interpretation of the runtime. The actual time is still strongly tied to the size of the string; a string of size 1 will run much faster than one of size 15 even if there is technically a constant bound. In other words, within the constraints of your problem there is still a strong correlation to n.
  2. "My algorithm will process a list of n strings, each with maximum size 15". Here we have a different story; the runtime is dominated by having to run through the list! There's a point where n is so large that the time to process a single string doesn't change the correlation. Now it makes sense to consider the time to process a single string O(1), and therefore the time to process the whole list O(n)

That said, Big-O notation doesn't have to only use one variable! There are problems where upper bounds are intrinsic to the algorithm, but you wouldn't put a bound on the input arbitrarily. Instead, you can describe each dimension of your input as a different variable:

n = list length
s = maximum string length
=> O(n*s)
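
A minimal C++ sketch of the two-variable view (the container and function name are assumptions for illustration):

#include <cstddef>
#include <string>
#include <vector>

// Visits every character of every string: about n * s character visits,
// i.e. O(n*s). If s is a fixed constant (say, 15), this collapses to O(n);
// if s can grow, it stays a separate variable in the bound.
std::size_t totalCharacters(const std::vector<std::string>& strings) {
    std::size_t visited = 0;
    for (const std::string& str : strings) {   // n iterations over the list
        for (char c : str) {                   // at most s iterations per string
            (void)c;                           // stand-in for per-character work
            ++visited;
        }
    }
    return visited;
}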

CodePudding user response:

Do problem constraints change the time complexity of algorithms?

No.

"If I know for sure that the length of the string is less than, say, 15 characters ..."

We already know the length of the string is less than SIZE_MAX. Knowing a fixed upper bound for the string length does not make the time complexity O(1).

Time complexity remains O(n).
