I'm watching the Coursera lectures on algorithms, and the professor says that the naive implementation of Dijkstra's shortest-path algorithm, without using heaps, takes O(nm) time (n is the number of vertices and m is the number of edges).
He claims that the main loop goes through the remaining vertices besides the source, which is n-1 vertices; this I can understand. But inside the loop, the algorithm goes through the edges whose tail is in the processed set and whose head is in the unprocessed set, in order to pick the next shortest path. Why does that mean there are m edges to go through? Couldn't we just go through the edges that satisfy the criterion (tail in the processed set, head in the unprocessed set), even in a naive implementation?
Could anyone please help me understand this? Thanks.
CodePudding user response:
When you consider big-O time complexity, you should think of it as a form of upper bound. Meaning, if some input can possibly make your program run in O(N) while some other input can possibly make your program run in O(NM), we generally say that this is an O(NM) program (especially when dealing with algorithms). This is called considering an algorithm in its worst case.
There are rare cases where we don't consider the worst case (amortized time complexity analysis, or time complexity with random elements to it, such as quicksort). In the case of quicksort, for example, the worst-case time complexity is O(N^2), but the chance of that happening is so small that in practice, if we select a random pivot, we can expect O(N log N) time complexity. That being said, unless you can guarantee (or have high confidence) that your program runs faster than O(NM), it is an O(NM) program.
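As a side note, that randomized-pivot idea looks something like the following minimal Python sketch (my own illustration, not anything from the course); the random pivot is what makes the O(N^2) behaviour extremely unlikely in practice:

```python
import random

def quicksort(items):
    """Quicksort with a random pivot: expected O(N log N), worst case O(N^2)."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)  # random pivot: no fixed input reliably triggers the worst case
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```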
Let's consider the case of your naive implementation of Dijkstra's algorithm. You didn't specify whether you were considering directed or undirected graphs, so I'll assume the graph is undirected (the case for a directed graph is extremely similar).
We go through all the nodes besides the source, so we already have an O(N) outer loop.
In each iteration of that loop, we consider all the edges that go from a processed node to an unprocessed node. In the worst case, there are close to O(M) such edges in each iteration, not O(1) or anything smaller than O(M).
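To make that inner scan concrete, here is a minimal Python sketch of the naive approach. The names (naive_dijkstra, an adjacency-list graph of (neighbor, weight) pairs) are my own, so treat it as an illustration of the loop structure rather than the professor's actual code:

```python
import math

def naive_dijkstra(graph, source):
    """Naive Dijkstra: graph[u] is a list of (v, weight) pairs."""
    dist = {source: 0}                  # processed vertices and their final distances
    unprocessed = set(graph) - {source}

    while unprocessed:                  # O(N) iterations of the main loop
        best_vertex, best_dist = None, math.inf
        # Inner scan: every edge whose tail is processed and whose head is not.
        # In the worst case this touches on the order of M edges, and the scan
        # is repeated in every one of the O(N) iterations -> O(NM) overall.
        for u in dist:
            for v, w in graph[u]:
                if v in unprocessed and dist[u] + w < best_dist:
                    best_vertex, best_dist = v, dist[u] + w
        if best_vertex is None:         # remaining vertices are unreachable
            break
        dist[best_vertex] = best_dist
        unprocessed.remove(best_vertex)
    return dist
```

For example, with a small hypothetical graph:

```python
g = {1: [(2, 1), (3, 4)], 2: [(1, 1), (3, 2)], 3: [(1, 4), (2, 2)]}
print(naive_dijkstra(g, 1))   # {1: 0, 2: 1, 3: 3}
```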
There's an easy way to see this. If each edge were only ever visited a single time, we could say that the algorithm runs in O(M). However, in your naive implementation of Dijkstra's algorithm, the same edge can be considered multiple times; asymptotically speaking, O(N) times.
You can try creating a graph yourself and dry-running Dijkstra on paper. You'll notice that each edge can be considered up to N times: once in each iteration of the O(N) loop. For instance, in a star graph whose center is the source, the edge leading to whichever vertex gets processed last stays a "processed-to-unprocessed" edge, and gets re-examined, in every iteration.
In other words, the reason we can't say the program is faster than O(NM) is that there is no guarantee that each edge is processed fewer than N times, say only O(log N) or O(sqrt N) times. Therefore, the best upper bound we can give on how many times a single edge is considered is N, although in practice it may be smaller than N by some constant factor. We do not, however, account for such constants in big-O time complexity.
However, your thought process may lead to a better implementation of Dijkstra. When considering the algorithm, you may realize that instead of scanning every edge that goes from a processed vertex to an unprocessed vertex in every iteration of the main loop, you only have to relax the edges adjacent to the vertex you have just processed, and spend O(N) per iteration picking the closest unprocessed vertex (the classic array-based implementation). Implemented like so, the algorithm achieves a complexity of O(N^2 + M) = O(N^2), as sketched below.
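Here's a rough sketch of that idea, again in Python with names of my own choosing; it's the standard array-based Dijkstra rather than anything specific from the course. Picking the closest unprocessed vertex costs O(N) per iteration, and each edge is relaxed only when its tail is processed, so the total is O(N^2 + M) = O(N^2):

```python
import math

def dijkstra_array(graph, source):
    """Array-based Dijkstra: graph[u] is a list of (v, weight) pairs."""
    dist = {u: math.inf for u in graph}    # tentative distances
    dist[source] = 0
    processed = set()

    for _ in range(len(graph)):            # O(N) iterations
        # O(N) scan to pick the closest unprocessed vertex.
        u = min((v for v in graph if v not in processed),
                key=lambda v: dist[v], default=None)
        if u is None or dist[u] == math.inf:
            break                          # remaining vertices are unreachable
        processed.add(u)
        # Relax only the edges leaving u: over the whole run each edge is
        # touched a constant number of times, for O(M) total relaxation work.
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```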