time & space complexity of list versus array operations

I'm trying to wrap my head around the space & time complexity of algorithms.

I have two examples of different ways to modify each individual element in an array. I'm using Python and I'm curious if there's a difference between the complexity of these two operations...

First, I initialize a list in Python, iterate over it, and append 1 + i for each element i to a new list that's already been initialized.

# initialize f 
f = [1.2, 2.5, 2.7, 2.8, 3.9, 4.2] 

# initialize new list new_f
new_f = []

# loop through f and add each modified element to the new list 
for i in f:
    new_f.append(1 + i)

print(f"new_f = {new_f}")

The second method involves creating a NumPy array we'll call na and then simply adding 1 to each element, like so:

# import scientific computing package NumPy
import numpy as np 

# create new array "na"
na = np.array([1,2,3,4,5])

# add 1 to each element in na
na = na + 1

print(na)

I don't think I'm completely understanding Big-O notation at this point, but it seems to me that both methods have time complexity O(n). However, the first method has space complexity O(n + m) (or maybe O(2n)?) because a new list is being created, while the NumPy method has space complexity of only O(n) because the original array will be destroyed.

Can anyone help me out with either verifying my logic is correct, or clarifying? Thank you in advance!!

CodePudding user response:

So first off, na = na + 1 requires additional space too; you're creating a new array with the incremented values and replacing the original. While you're creating the new one, the original still exists, so your peak memory usage doubles just like in the non-NumPy case. To avoid that cost, you'd have to write na += 1, which delegates to NumPy's in-place addition code.
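
If you want to see the difference for yourself, np.shares_memory reports whether two arrays overlap in memory. A minimal sketch (the extra reference orig is just for illustration):

import numpy as np

na = np.array([1, 2, 3, 4, 5])
orig = na  # keep a second reference to the original buffer

na = na + 1  # copying form: allocates a brand-new array
print(np.shares_memory(na, orig))  # False -> new memory was allocated

na = orig  # point back at the original array
na += 1    # in-place form: writes into the existing buffer
print(np.shares_memory(na, orig))  # True -> no extra O(n) allocation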

The other correction to note is that, if you include the space originally occupied, all versions of this code have O(n) space complexity. Constant factors are ignored in Big-O notation, so even if you make a copy and double your memory usage, the space required is still O(n). All that means is that it's proportional to n; it could actually be 1 GB per element and it would still be O(n), just as much as if each element consumed 1 byte. Doubling the memory used keeps it strictly proportional to n, so it remains O(n). If you talk solely about the additional space required, in-place solutions can be described as O(1), while not-in-place solutions are O(n); that's all.
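
For completeness, the plain-Python loop has an in-place counterpart too: overwrite each element of f rather than appending to a second list. A minimal sketch of that idea, reusing the f from the question:

f = [1.2, 2.5, 2.7, 2.8, 3.9, 4.2]

# overwrite each element in place; no second list is created,
# so the additional space used is O(1)
for idx in range(len(f)):
    f[idx] = 1 + f[idx]

print(f)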
