Time Complexity of Uniform Cost Search (Including Min Priority Queue)


In most textbooks, the asymptotic upper bound for the worst-case running time of UCS is given as O(b^(1 + C/ε)). The details are explained here: Time complexity of Uniform-cost search.

O(b^(1 + C/ε)) is the upper bound on the total number of states that UCS must explore before finding a given target state. Usually, all of these states are stored and maintained in a min-priority queue (the frontier).

I am wondering why the overhead of maintaining the min-priority queue is not explicitly taken into account when stating the time complexity of UCS. Let's define n = b^(1 + C/ε). Shouldn't the running time then be O(n lg n)?

Why is the lg n factor not explicitly included? Is it because we can ignore it when focusing on the asymptotic behavior?
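For reference, expanding lg n using the definition of n above (a direct calculation from those definitions):

```latex
\lg n = \lg\left(b^{\,1 + C/\varepsilon}\right) = \left(1 + \frac{C}{\varepsilon}\right)\lg b,
\qquad
n \lg n = b^{\,1 + C/\varepsilon}\left(1 + \frac{C}{\varepsilon}\right)\lg b .
```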

CodePudding user response:

The 'problem' here isn't really about the algorithm; it's that different subfields of CS use different names for things (and sometimes the same name for different things). This is entirely a case of differing definitions.

'Uniform cost search' is another name for a variant of Dijkstra's algorithm, usually used in AI contexts where the underlying graph may be infinite. As you mentioned, the link you provided and the AI textbook it cites calculate that O(b^(1 + C/ε)) nodes are explored by UCS, which is correct. The number of elementary operations the algorithm performs on a computer (the usual measure in computational complexity theory) would, however, include a logarithmic factor for the priority queue operations. If you measure running time in elementary operations, the log n factor cannot be ignored, even asymptotically.
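By way of illustration, here is a minimal Python sketch of UCS with a binary-heap frontier (the standard-library heapq); `successors` is an assumed problem-specific function yielding (next_state, step_cost) pairs, not something from the textbook. Each marked heap operation costs O(log n) in the size of the frontier, which is exactly the logarithmic factor under discussion.

```python
import heapq
import itertools

def uniform_cost_search(start, goal, successors):
    """Minimal UCS sketch with a binary-heap frontier.

    `successors(state)` is an assumed problem-specific function that
    yields (next_state, step_cost) pairs with step_cost >= ε > 0.
    Returns the cheapest path cost from start to goal, or None.
    """
    tie = itertools.count()             # tie-breaker so heap entries never compare states
    frontier = [(0, next(tie), start)]  # min-priority queue ordered by path cost
    best_cost = {start: 0}              # cheapest cost discovered so far per state

    while frontier:
        cost, _, state = heapq.heappop(frontier)   # O(log n) heap operation
        if state == goal:                          # goal test on expansion, as UCS requires
            return cost
        if cost > best_cost.get(state, float("inf")):
            continue                               # stale entry superseded by a cheaper one
        for next_state, step_cost in successors(state):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(next_state, float("inf")):
                best_cost[next_state] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), next_state))  # O(log n)
    return None
```

For example, on the toy graph A→B (cost 1), A→C (cost 5), B→C (cost 1), `uniform_cost_search('A', 'C', lambda s: {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)]}.get(s, []))` returns 2.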

Since the textbook you're quoting is an AI textbook, priority queues are only mentioned in passing; the focus isn't on the various implementations of data structures and their runtimes. The authors state that they count only the number of states explored when comparing the time complexity of search algorithms, because that is the book's focus. An 'Introduction to Algorithms'-style textbook would, on the other hand, discuss implementations and Fibonacci heaps and use a different measure of time complexity.
