Big O notation of "addition"


I'm taking a course about big O notation on Coursera. I watched a video about the big O of a Fibonacci algorithm (the non-recursive method), which goes like this:

Operation                     Runtime
create an array F[0..n]        O(n)
F[0] <-- 0                     O(1)
F[1] <-- 1                     O(1)
for i from 2 to n:             Loop O(n) times
  F[i] <-- F[i-1] + F[i-2]     O(n) => I don't understand this line, isn't it O(1)?
return F[n]                    O(1)
Total: O(n) + O(1) + O(1) + O(n)*O(n) + O(1) = O(n^2)
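
In case it helps, here is my attempt at the same algorithm in Python (Python ints are arbitrary precision, so the additions really are done on numbers that keep growing):

def fib(n):
    # Create an array F[0..n], as in the pseudocode.
    F = [0] * (n + 1)
    if n >= 1:
        F[1] = 1
    for i in range(2, n + 1):
        # Plain addition, but F[i-1] and F[i-2] are arbitrary-precision
        # integers and they keep getting longer as i grows.
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print(fib(10))              # 55
print(len(str(fib(1000))))  # 209 decimal digits, roughly 1000/5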

I understand every part except the line F[i] <-- F[i-1] + F[i-2] being O(n). Isn't it O(1), since it's just a simple addition? Is it the same as F[i] <-- 1 + 1?

The explanation they give me is: "But the addition is a bit worse. And normally additions are constant time. But these are large numbers. Remember, the nth Fibonacci number has about n over 5 digits to it, they're very big, and they often won't fit in the machine word."

"Now if you think about what happens if you add two very big numbers together, how long does that take? Well, you sort of add the tens digit and you carry, and you add the hundreds digit and you carry, and add the thousands digit, you carry and so on and so forth. And you sort of have to do work for each digits place. And so the amount of work that you do should be proportional to the number of digits. And in this case, the number of digits is proportional to n, so this should take O(n) time to run that line of code".

I'm still a bit confused. Does that mean a large number affects the time complexity too? For example, a = n + 1 is O(1), while a = n^50 + n^50 isn't O(1) anymore?

Video link for anyone who needs more information (4:56 to 6:26).

CodePudding user response:

Big-O is just a notation for keeping track of orders of magnitude. But when we apply that in algorithms, we have to remember "orders of magnitude of WHAT"? In this case it is "time spent".

CPUs are set up to execute basic arithmetic on basic arithmetic types in constant time. For most purposes, we can assume we are dealing with those basic types.

However, if n is a very large positive integer, we can't assume that. A very large integer needs O(log(n)) bits to represent, which means that, whether we store it as bits, bytes, etc., we need an array of O(log(n)) things to hold it. (We would need fewer bytes than bits, but that is just a constant factor.) And when we do a calculation, we have to think about what we will actually do with that array.
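
As a quick illustration in Python (nothing specific to Fibonacci yet, just the size of the representation):

# bit_length() reports how many bits Python needs to represent the integer;
# it grows like log2(n), so bigger numbers need proportionally more storage.
for n in (10, 10**6, 10**12, 10**100):
    print(n, "needs", n.bit_length(), "bits")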

Now suppose that we're trying to calculate n + m. We're going to need to generate a result of size O(log(n + m)), which must take at least that much time to allocate. Luckily the grade-school method of long addition, where you add digits and keep track of carries, can be adapted for big-integer libraries and takes O(log(n + m)) time.
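
Here is a minimal sketch of that grade-school method on lists of decimal digits (school_add is a made-up helper, not from any real big-integer library), just to show where the per-digit work comes from:

def school_add(a_digits, b_digits):
    # a_digits and b_digits hold decimal digits, least significant first.
    # Each loop iteration handles one digit position plus the carry,
    # so the total work is proportional to the number of digits.
    result = []
    carry = 0
    for i in range(max(len(a_digits), len(b_digits))):
        total = carry
        if i < len(a_digits):
            total += a_digits[i]
        if i < len(b_digits):
            total += b_digits[i]
        result.append(total % 10)
        carry = total // 10
    if carry:
        result.append(carry)
    return result

print(school_add([9, 9, 9], [7, 2]))  # 999 + 27 = 1026 -> [6, 2, 0, 1]

The loop body runs once per digit position, so the cost is proportional to the number of digits of the larger operand, i.e. O(log(n + m)).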

So when you're looking at addition, the log of the size of the answer is what matters. Since log(50^n) = n * log(50), operations with 50^n are at least O(n). (Computing 50^n in the first place might take longer...) And it means that calculating n + 1 takes O(log(n)) time.

Now in the case of the Fibonacci sequence, F(n) is roughly φ^n where φ = (1 + sqrt(5))/2, so log(F(n)) = O(n). That is why the addition F[i] <-- F[i-1] + F[i-2] inside the loop costs O(n) rather than O(1).
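
You can check that growth directly in Python (a rough sketch; fib here is just the simple iterative version):

import math

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2

# The bit length of F(n) grows linearly in n, with slope log2(phi) ~ 0.694,
# so each addition in the loop handles O(n) digits.
for n in (100, 1000, 10000):
    print(n, fib(n).bit_length(), round(n * math.log2(phi)))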
