1. Introduction

Dijkstra's algorithm, conceived by Dutch computer scientist Edsger W. Dijkstra in 1956 and published in 1959, is one of the most celebrated algorithms in computer science. It solves the single-source shortest path problem for a weighted graph where all edge weights are non-negative, producing a shortest-path tree rooted at the source vertex.

The algorithm employs a greedy strategy: at each step, it selects the unvisited vertex with the smallest known distance from the source and explores all of its outgoing edges. This greedy choice is provably optimal when all edge weights are non-negative, which is the fundamental reason Dijkstra's algorithm cannot handle negative edge weights correctly.

"What is the shortest way to travel from Rotterdam to Groningen? It is the algorithm for the shortest path, which I designed in about twenty minutes." -- Edsger W. Dijkstra

2. How It Works

Dijkstra's algorithm maintains two sets of vertices: those whose shortest distance from the source has been finalized (the "settled" set) and those that are still candidates (the "unsettled" set). It uses a priority queue to efficiently select the unsettled vertex with the minimum distance estimate.

Core Principles

  1. Initialize: Set the distance to the source vertex as 0 and all other distances as infinity. Insert all vertices into a priority queue keyed by their distance values.
  2. Extract minimum: Remove the vertex u with the smallest distance from the priority queue. This vertex's distance is now finalized.
  3. Relax neighbors: For each neighbor v of u, check if the path through u offers a shorter distance. If dist[u] + weight(u,v) < dist[v], update dist[v] and record u as the predecessor of v.
  4. Repeat: Continue extracting and relaxing until the priority queue is empty or all reachable vertices have been settled.

Critical requirement: Dijkstra's algorithm produces incorrect results if any edge weight is negative. For graphs with negative weights, use the Bellman-Ford algorithm instead.

3. Algorithm and Pseudocode

function Dijkstra(Graph, source):
    // Step 1: Initialize
    for each vertex v in Graph.vertices:
        dist[v] = INFINITY
        predecessor[v] = NULL
        visited[v] = false
    dist[source] = 0

    // Priority queue holds (distance, vertex) pairs
    PQ = MinPriorityQueue()
    PQ.insert((0, source))

    while PQ is not empty:
        (d, u) = PQ.extractMin()
        if visited[u]:
            continue
        visited[u] = true

        // Step 2: Relax all neighbors
        for each neighbor v of u:
            if not visited[v]:
                newDist = dist[u] + weight(u, v)
                if newDist < dist[v]:
                    dist[v] = newDist
                    predecessor[v] = u
                    PQ.insert((newDist, v))

    return dist[], predecessor[]

Note: This version uses a lazy deletion approach where duplicate entries in the priority queue are skipped via the visited check. An alternative approach uses a decrease-key operation, which depends on the priority queue implementation.
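The lazy-deletion pseudocode above translates almost line-for-line into Python, since the standard-library heapq module provides exactly the min-priority-queue operations needed (it has no decrease-key, which is why lazy deletion fits naturally). The graph representation here, a dict mapping each vertex to a list of (neighbor, weight) pairs, is one reasonable choice among several:

```python
import heapq

def dijkstra(graph, source):
    """Lazy-deletion Dijkstra.

    graph maps each vertex to a list of (neighbor, weight) pairs.
    Returns (dist, predecessor) dicts.
    """
    dist = {v: float('inf') for v in graph}
    predecessor = {v: None for v in graph}
    dist[source] = 0
    pq = [(0, source)]          # entries are (distance, vertex) pairs
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:        # stale entry left behind by lazy deletion
            continue
        visited.add(u)          # u's distance is now final
        for v, w in graph[u]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                predecessor[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, predecessor
```

Because stale entries are merely skipped rather than removed, the queue can hold up to E entries, but each is popped at most once, preserving the O((V+E) log V) bound.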

4. Complexity Analysis

Implementation        Extract-Min    Decrease-Key      Total Time
Array (linear scan)   O(V)           O(1)              O(V^2)
Binary heap           O(log V)       O(log V)          O((V+E) log V)
Fibonacci heap        O(log V)       O(1) amortized    O(V log V + E)

Space complexity (all implementations): O(V + E)

For sparse graphs (E = O(V)), the binary heap implementation is preferred. For dense graphs (E = O(V^2)), the simple array implementation with O(V^2) time can actually outperform heap-based approaches due to lower constant factors.
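For the dense-graph case, the array variant can be sketched as follows (an assumed adjacency-matrix representation, with None marking a missing edge; not the only way to encode it). Each of the V iterations does one O(V) linear scan instead of heap operations:

```python
def dijkstra_dense(adj, source):
    """O(V^2) Dijkstra using a linear scan instead of a heap.

    adj is a V x V matrix; adj[u][v] is the edge weight or None.
    Returns the list of shortest distances from source.
    """
    n = len(adj)
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    settled = [False] * n
    for _ in range(n):
        # Linear scan: pick the unsettled vertex with minimum distance.
        u = min((v for v in range(n) if not settled[v]),
                key=lambda v: dist[v], default=None)
        if u is None or dist[u] == INF:
            break               # remaining vertices are unreachable
        settled[u] = True
        for v in range(n):
            w = adj[u][v]
            if w is not None and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

With no heap to maintain, the inner loops are simple array accesses, which is where the lower constant factors on dense graphs come from.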

5. Worked Example

Consider an undirected weighted graph with 6 vertices (A through F):

Edge      Weight
A -- B    4
A -- C    2
B -- C    1
B -- D    5
C -- D    8
C -- E    10
D -- E    2
D -- F    6
E -- F    3

Source vertex: A

Step-by-Step Trace

Step 1: Start with A (dist = 0)

Settled: {A}. Relax neighbors of A:

  • A->B: dist[B] = 4
  • A->C: dist[C] = 2

dist = [A:0, B:4, C:2, D:inf, E:inf, F:inf]

Step 2: Extract C (dist = 2)

Settled: {A, C}. Relax neighbors of C:

  • C->B: 2+1 = 3 < 4, so dist[B] = 3
  • C->D: 2+8 = 10, dist[D] = 10
  • C->E: 2+10 = 12, dist[E] = 12

dist = [A:0, B:3, C:2, D:10, E:12, F:inf]

Step 3: Extract B (dist = 3)

Settled: {A, C, B}. Relax neighbors of B:

  • B->D: 3+5 = 8 < 10, so dist[D] = 8

dist = [A:0, B:3, C:2, D:8, E:12, F:inf]

Step 4: Extract D (dist = 8)

Settled: {A, C, B, D}. Relax neighbors of D:

  • D->E: 8+2 = 10 < 12, so dist[E] = 10
  • D->F: 8+6 = 14, dist[F] = 14

dist = [A:0, B:3, C:2, D:8, E:10, F:14]

Step 5: Extract E (dist = 10)

Settled: {A, C, B, D, E}. Relax neighbors of E:

  • E->F: 10+3 = 13 < 14, so dist[F] = 13

dist = [A:0, B:3, C:2, D:8, E:10, F:13]

Step 6: Extract F (dist = 13)

Settled: {A, C, B, D, E, F}. No further updates.

Final Shortest Distances from A

Vertex    Distance    Shortest Path
A         0           A
B         3           A -> C -> B
C         2           A -> C
D         8           A -> C -> B -> D
E         10          A -> C -> B -> D -> E
F         13          A -> C -> B -> D -> E -> F
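The trace above can be checked mechanically. The sketch below builds the undirected example graph from the edge table, runs a heapq-based Dijkstra, and reconstructs each path by walking the predecessor links back to the source:

```python
import heapq

def dijkstra(graph, source):
    # Standard lazy-deletion Dijkstra over an adjacency-list dict.
    dist = {v: float('inf') for v in graph}
    pred = {v: None for v in graph}
    dist[source] = 0
    pq, visited = [(0, source)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v], pred[v] = d + w, u
                heapq.heappush(pq, (dist[v], v))
    return dist, pred

# The undirected example graph: each edge is added in both directions.
edges = [('A','B',4), ('A','C',2), ('B','C',1), ('B','D',5),
         ('C','D',8), ('C','E',10), ('D','E',2), ('D','F',6), ('E','F',3)]
graph = {v: [] for v in 'ABCDEF'}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

dist, pred = dijkstra(graph, 'A')

def path_to(t):
    # Follow predecessor links from t back to the source, then reverse.
    out = []
    while t is not None:
        out.append(t)
        t = pred[t]
    return '->'.join(reversed(out))
```

Running this reproduces the table: dist['F'] is 13 and path_to('F') yields A->C->B->D->E->F.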

6. Priority Queue Implementations

The choice of priority queue data structure significantly impacts the performance of Dijkstra's algorithm. The three main options each offer different trade-offs:

Array-Based (Unordered)

The simplest implementation stores distances in a plain array and scans linearly to find the minimum. This yields O(V^2) total time and is optimal for dense graphs where E approaches V^2.

Binary Heap

The most commonly used implementation in practice. Provides O(log V) extract-min and decrease-key operations, giving O((V+E) log V) overall. Most standard libraries provide binary heap implementations.

Fibonacci Heap

Offers the theoretically best bound of O(V log V + E) thanks to O(1) amortized decrease-key. However, the large constant factors and implementation complexity make it slower in practice for most graph sizes.

Practical note: For most real-world applications, a binary heap with lazy deletion (allowing duplicate entries and skipping already-visited vertices) provides the best balance of simplicity and performance.

7. Advantages and Disadvantages

Advantages

  • Efficiency: Faster than Bellman-Ford for graphs without negative weights, especially with a good priority queue.
  • Optimal substructure: The greedy approach guarantees optimal results for non-negative weight graphs.
  • Versatility: Works on both directed and undirected graphs.
  • Early termination: Can stop once the target vertex is settled, without computing all shortest paths.
  • Widely implemented: Available in virtually all graph libraries and navigation software.
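The early-termination advantage is a small change to the main loop: return as soon as the target is popped with a fresh distance, since a vertex's distance is final at the moment it is settled. A minimal sketch (same adjacency-list dict as assumed earlier):

```python
import heapq

def dijkstra_to_target(graph, source, target):
    """Stop as soon as target is settled; return its shortest distance
    (or infinity if unreachable). graph maps vertex -> [(neighbor, weight)].
    """
    dist = {source: 0}
    pq = [(0, source)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        if u == target:         # target settled: d is its final distance
            return d
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return float('inf')
```

On large graphs where the target is near the source, this can avoid exploring most of the graph.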

Disadvantages

  • No negative weights: Produces incorrect results if any edge weight is negative.
  • No negative cycle detection: Cannot identify negative-weight cycles.
  • Memory overhead: Priority queue implementations add space overhead compared to simpler algorithms.
  • Not optimal for all-pairs: Running Dijkstra from every vertex is less efficient than Floyd-Warshall for dense graphs.

8. Comparison with Other Algorithms

Feature            Dijkstra           Bellman-Ford          A* Search
Approach           Greedy             Dynamic programming   Greedy + heuristic
Negative weights   No                 Yes                   No
Time complexity    O((V+E) log V)     O(V * E)              Depends on heuristic
Target-oriented    Can stop early     Computes all paths    Yes, guided by heuristic
Optimality         Optimal            Optimal               Optimal with admissible heuristic
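For contrast with the greedy approach, a minimal Bellman-Ford sketch follows (my own illustration, using an edge list with vertices numbered 0..n-1). Instead of settling one vertex per step, it relaxes every edge V-1 times, which is what lets it tolerate negative weights and detect negative cycles:

```python
def bellman_ford(edges, n, source):
    """Bellman-Ford over edges = [(u, v, w)] with vertices 0..n-1.

    Handles negative weights; returns (dist, has_negative_cycle).
    """
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    # Relax every edge V-1 times; a shortest path uses at most V-1 edges.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement implies a negative cycle.
    has_cycle = any(dist[u] != INF and dist[u] + w < dist[v]
                    for u, v, w in edges)
    return dist, has_cycle
```

The blanket relaxation is exactly why Bellman-Ford costs O(V * E) where Dijkstra's greedy settling costs O((V+E) log V).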

9. Applications

  • GPS navigation: Finding the shortest driving route between two locations in road networks.
  • Network routing: Open Shortest Path First (OSPF) protocol uses Dijkstra's algorithm to compute routing tables in IP networks.
  • Social networks: Computing degrees of separation and finding the closest connections between users.
  • Robotics: Path planning for autonomous robots navigating through environments with varying terrain costs.
  • Telecommunications: Routing signals through networks to minimize latency or cost.
  • Flight scheduling: Finding cheapest or fastest flight connections between airports.

10. References

  1. Dijkstra, E. W. (1959). "A Note on Two Problems in Connexion with Graphs." Numerische Mathematik, 1, 269-271.
  2. Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. (2009). Introduction to Algorithms (3rd ed.). MIT Press. Chapter 24.
  3. Fredman, M. L. and Tarjan, R. E. (1987). "Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms." Journal of the ACM, 34(3), 596-615.
  4. Sedgewick, R. and Wayne, K. (2011). Algorithms (4th ed.). Addison-Wesley. Section 4.4.
  5. Skiena, S. S. (2008). The Algorithm Design Manual (2nd ed.). Springer. Chapter 6.