Why is my algorithm showing linear behavior when it's supposed to be O(m·4^m)?

Time:01-13

I am trying to understand the complexity of an algorithm I am experimenting with. The site where I found the algorithm states that it has a complexity of O(mn·4^(m+n)), but when I held n constant in my experimental analysis, the results showed linear behavior. Shouldn't it be something like O(m·4^m)? Can anyone explain why this may be happening?

This is my code:

def longestIncreasingPathDFS(matrix):
    maxlen = [0]
    for i in range(len(matrix)):
        for j in range(len(matrix[0])):
            dfs(matrix, i, j, maxlen, 1)
    return maxlen[0]

def dfs(matrix, i, j, maxlen, length):
    #keeps the longest length in max[0]
    maxlen[0] = max(maxlen[0], length)
    m = len(matrix)
    n = len(matrix[0])
    
    dx = [-1, 0, 1, 0]
    dy = [0, 1, 0, -1]

    for k in range(4):
        x = i + dx[k]
        y = j + dy[k]

        if x >= 0 and x < m and y >= 0 and y < n and matrix[x][y] > matrix[i][j]:
            dfs(matrix, x, y, maxlen, length + 1)

This is how I get the linear plot:

import time
import matplotlib.pyplot as plt
import random

times = []
input_sizes = range(1, 500)

for i in input_sizes:
    matrix = [[random.randint(0,100) for _ in range(i)] for _ in range(10)]
    start_time = time.time()
    longestIncreasingPathDFS(matrix)
    end_time = time.time()
    times.append(end_time - start_time)

plt.plot(input_sizes, times)
plt.xlabel("Input size")
plt.ylabel("Time (secs)")
plt.show()

I tried increasing the sample size, but the plot is clearly linear. I also tried to search for related questions about this algorithm, but with no luck.
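As a side note on the measurement itself: wall-clock timing is noisy, so counting the number of recursive calls directly gives a cleaner growth curve. Here is a sketch of my own (the `_counted` names and the `calls` counter are mine, not part of the original algorithm):

```python
# Same DFS as in the question, but instrumented: calls[0] counts how many
# times dfs_counted() runs, which is an exact measure of the work done.

def longestIncreasingPathCounted(matrix):
    calls = [0]
    maxlen = [0]
    for i in range(len(matrix)):
        for j in range(len(matrix[0])):
            dfs_counted(matrix, i, j, maxlen, 1, calls)
    return maxlen[0], calls[0]

def dfs_counted(matrix, i, j, maxlen, length, calls):
    calls[0] += 1                      # one unit of work per invocation
    maxlen[0] = max(maxlen[0], length)
    m, n = len(matrix), len(matrix[0])
    dx = [-1, 0, 1, 0]
    dy = [0, 1, 0, -1]
    for k in range(4):
        x, y = i + dx[k], j + dy[k]
        if 0 <= x < m and 0 <= y < n and matrix[x][y] > matrix[i][j]:
            dfs_counted(matrix, x, y, maxlen, length + 1, calls)
```

Plotting `calls[0]` instead of elapsed seconds removes timer resolution and interpreter warm-up from the picture.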

CodePudding user response:

Due to the recursion, the worst case is that you go n×m times through on average n×m/2 elements, i.e. O((n×m)^4), I'd say.

However, like many algorithms, the normal case is much more forgiving/efficient than the constructed worst case.

So in most cases, it will be more like a constant times n×m, because the longest path is much shorter than the number of matrix elements.

For a random matrix it may not even grow linearly with size, but be essentially constant: the probability of a continuous increasing sequence decreases exponentially with its length, hence your observation.
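This claim is easy to check empirically. Below is a sketch of my own (the memoized `longest_path_len` helper is mine, not the poster's algorithm) that measures the longest strictly increasing path in random matrices of growing width; if the claim holds, the path length stays far below the number of cells:

```python
import random
from functools import lru_cache

def longest_path_len(matrix):
    """Length of the longest strictly increasing 4-neighbour path (memoized DP)."""
    m, n = len(matrix), len(matrix[0])

    @lru_cache(maxsize=None)
    def walk(i, j):
        # longest increasing path starting at cell (i, j)
        best = 1
        for dx, dy in ((-1, 0), (0, 1), (1, 0), (0, -1)):
            x, y = i + dx, j + dy
            if 0 <= x < m and 0 <= y < n and matrix[x][y] > matrix[i][j]:
                best = max(best, 1 + walk(x, y))
        return best

    return max(walk(i, j) for i in range(m) for j in range(n))

random.seed(42)  # seeded so the experiment is reproducible
for cols in (10, 100, 500):
    mat = [[random.randint(0, 100) for _ in range(cols)] for _ in range(10)]
    print(10 * cols, "cells -> longest path", longest_path_len(mat))
```

In my understanding the printed path lengths barely move while the cell count grows 50-fold, which is exactly why the timing plot looks linear (one cheap DFS per starting cell).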

Edit: Tip: Try a large matrix like this (instead of a random one), with the values sorted so that the path stretches over all elements:

[[1,    2,    ... n],
 [2n,   2n-1, ... n+1],
 [2n+1, 2n+2, ... 3n],
 [....            n*m]]

I expect this to be more like (n*m)^4.

Ah, and another limitation: you use random integers between 0 and 100, so the path is never longer than 101 in your case. So the complexity is limited to O(n*m*p), where p is the largest integer you use in the random matrix.
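To make that last point concrete, here is a small sketch of my own (the helper name is mine): a strictly increasing path can never revisit a value, so its length is capped by the number of distinct values in the matrix, however wide the matrix gets.

```python
import random

def distinct_value_bound(matrix):
    # Upper bound on the longest strictly increasing path:
    # each step must reach a strictly larger value, so a path
    # can use each distinct value at most once.
    return len({v for row in matrix for v in row})

random.seed(0)
matrix = [[random.randint(0, 100) for _ in range(500)] for _ in range(10)]
print(distinct_value_bound(matrix))  # at most 101, however wide the matrix
```

With `randint(0, 100)` there are only 101 possible values, so widening the matrix cannot lengthen the paths, only add more (short) starting points.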

CodePudding user response:

Proving @Dr. V's point

import time
import matplotlib.pyplot as plt
import random
import numpy as np

def path_exploit(rows, cols):
    """
    Creates a matrix whose longest increasing path has length 2 * (rows + cols) - 2
    """
    # Init a zero matrix of size (rows, cols)
    matrix = np.zeros(shape = (rows, cols))
    # Create the longest path along the matrix boundary
    bd = [(0, j) for j in range(matrix.shape[1])] \
       + [(i, matrix.shape[1] - 1) for i in range(1, matrix.shape[0])] \
       + [(matrix.shape[0] - 1, j) for j in range(matrix.shape[1] - 2, -1, -1)] \
       + [(i, 0) for i in range(matrix.shape[0] - 2, 0, -1)]
    count = 1
    for element in bd:
        matrix[element[0], element[1]] = count
        count += 1
    return matrix.tolist()

times = []
input_sizes = range(1, 1000, 50)

for i in input_sizes:
    matrix = path_exploit(i, 10) #[[random.randint(0,100) for _ in range(i)] for _ in range(10)]
    start_time = time.time()
    longestIncreasingPathDFS(matrix)
    end_time = time.time()
    times.append(end_time - start_time)

plt.plot(input_sizes, times)
plt.xlabel("Input size")
plt.ylabel("Time (secs)")
plt.show()

Time vs. # of cols now starts to look exponential.

Plot
