def lcs(X, Y, i, j):
    # First version: plain recursion.
    if i == 0 or j == 0:
        return 0
    elif X[i-1] == Y[j-1]:
        return 1 + lcs(X, Y, i-1, j-1)
    else:
        return max(lcs(X, Y, i, j-1), lcs(X, Y, i-1, j))
def lcs(X, Y, i, j):
    # Second version: memoized; c is a table pre-filled with -1.
    if c[i][j] >= 0:
        return c[i][j]
    if i == 0 or j == 0:
        c[i][j] = 0
    elif X[i-1] == Y[j-1]:
        c[i][j] = 1 + lcs(X, Y, i-1, j-1)
    else:
        c[i][j] = max(lcs(X, Y, i, j-1), lcs(X, Y, i-1, j))
    return c[i][j]
The second one runs much faster than the first, but I found their time complexities to be the same: O(N²).
CodePudding user response:
The complexity of the first function is exponential: every recursive call can spawn two new recursive calls, so many values are calculated over and over. This is a classic pitfall, often encountered, for instance, when writing code for the Fibonacci function.
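To see the same pitfall in isolation, here is a minimal sketch with Fibonacci (illustration only; the names fib and fib_memo are made up for this example):

from functools import lru_cache

def fib(n):
    # Naive recursion: two recursive calls per invocation,
    # so the number of calls grows exponentially with n.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoized: each value is computed once, so only O(n) calls total.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)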
The second function is a "memoized" version of the first. All the calculated values are stored in the table c, to avoid recalculating them. Thus the double recursive call max(lcs(X,Y,i,j-1), lcs(X,Y,i-1,j)) doesn't blow up the number of recursive calls, because most recursive calls terminate immediately with "oh, this value has already been calculated, it's in the table".
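Note that the question's code assumes c already exists. A minimal sketch of the setup it implies (the -1 sentinel matches the c[i][j] >= 0 check; the table dimensions and sample strings are assumptions):

X = "AGGTAB"
Y = "GXTXAYB"
# Assumed setup: -1 means "not yet computed"; the table is
# (len(X)+1) x (len(Y)+1) so the i == 0 / j == 0 base cases fit.
c = [[-1] * (len(Y) + 1) for _ in range(len(X) + 1)]
print(lcs(X, Y, len(X), len(Y)))  # each (i, j) pair is computed at most once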
CodePudding user response:
Time complexity is not about how fast a particular piece of code or algorithm runs; it describes how the running time grows as the input size grows. It is entirely possible for one algorithm to run faster than another while both have the same complexity.
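As a small illustration of that point, here is a bottom-up LCS (a sketch, not from the question; lcs_bottom_up is a made-up name). It has the same O(len(X) * len(Y)) complexity as the memoized version, yet typically runs faster because it avoids recursion overhead:

def lcs_bottom_up(X, Y):
    # Same asymptotic complexity as the memoized recursion,
    # but no function-call overhead: just two nested loops.
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i-1] == Y[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i][j-1], dp[i-1][j])
    return dp[m][n]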