Big O complexity and unpacking from a list

Time:09-20

I'm struggling to understand Big O complexity. Is it correct that o_one() is "better" than o_n()?

def o_one(n: int):
    # O(1) ?
    print(n + 1)


o_one(1)
o_one(2)
o_one(3)
def o_n(lst: list):
    # O(n) ?
    while lst:
        print(lst.pop() + 1)


o_n([3, 2, 1])

If I'm correct about the time complexities, then I must be misunderstanding something fundamental. Surely they are both equally efficient? The first function needs to be called 3 times to do the same thing.

edit: I'll provide another example then, since this one was problematic.

If I take my Linked List append method, which is O(1)...

    def add_beginning(self, new):
        # O(1)
        new_node = Node(new)
        new_node.next = self.head
        self.head = new_node

... and add the ability for it to take a list of elements and append from that:

    def add_beginning(self, new):
        # O(n)
        if type(new) != list:
            new = [new]
        while new:
            new_node = Node(new.pop())
            new_node.next = self.head
            self.head = new_node

Did I make my function worse? I did change the time complexity from O(1) to O(n).
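For comparison, the O(n) version does the same total work as calling the O(1) version once per element, so it is O(1) per item inserted either way. Here is a minimal, self-contained sketch (the Node and LinkedList class definitions are assumptions based on the snippets above):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None


class LinkedList:
    def __init__(self):
        self.head = None

    def add_beginning(self, new):
        # O(n) in the number of elements passed in,
        # but still O(1) per element inserted.
        if not isinstance(new, list):
            new = [new]
        while new:
            new_node = Node(new.pop())
            new_node.next = self.head
            self.head = new_node


lst = LinkedList()
lst.add_beginning([3, 2, 1])  # one call, pops n elements
lst.add_beginning(0)          # a single element still works
```

Inserting n elements costs n constant-time operations whether you make n calls or one call, so the batch method isn't "worse" — its cost just scales with the input size, which is exactly what O(n) says.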

CodePudding user response:

O(1): retrieving one item from an array in memory (consecutive memory), like

def retrieve(n):
    return array[n]

O(n): checking whether a certain item is in the array

def is_in_array(n):
    for item in array:
        if item == n:
            return True
    return False

I think you can say the function retrieve is O(1), and the function is_in_array is O(N).

CodePudding user response:

Big O notation doesn't make a lot of sense for your example; you would typically use it to compare the efficiency of two (or more) algorithms that do the same thing differently. Take search, for example.

Let's say you have a sorted list of N numbers:

[1, 4, 7, 9, 13, 14, 17, 134, 167, 200, 201]

Let's say we want to find the position of 167 in this list. We can do this in multiple ways:

The trivial approach would be to start at the beginning and move to the next element in the list until we find the one we are looking for. In the worst case (our element is the last one), this takes N steps, so we have O(N) complexity. No matter how long our list is, we can always be sure we will find the element in N or fewer steps.
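The trivial approach can be sketched like this (the function name linear_search is mine, not from the question):

```python
def linear_search(lst, target):
    """Return the index of target in lst, or -1 if absent.

    O(N): in the worst case every element is inspected once.
    """
    for i, value in enumerate(lst):
        if value == target:
            return i
    return -1


numbers = [1, 4, 7, 9, 13, 14, 17, 134, 167, 200, 201]
print(linear_search(numbers, 167))  # 8
```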

Now let's look at a different approach: binary search. We start by looking at the element in the middle, which is 14. Since the list is sorted, we know our element cannot be among the 5 numbers before the 14, so we can throw them away. Now we look at the middle element of the remaining list [17, 134, 167, 200, 201]. That element is 167 and we are done. This algorithm lets us find our element much more quickly! Since we halve the numbers to be searched in each step, our complexity is O(log N). This means we will never need more than about log N steps to find the result, and sometimes we will be even quicker (like in our example).
