1. Introduction
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger W. Dijkstra in 1956 and published in 1959, is one of the most celebrated algorithms in computer science. It solves the single-source shortest path problem for a weighted graph where all edge weights are non-negative, producing a shortest-path tree rooted at the source vertex.
The algorithm employs a greedy strategy: at each step, it selects the unvisited vertex with the smallest known distance from the source and explores all of its outgoing edges. This greedy choice is provably optimal when all edge weights are non-negative, which is the fundamental reason Dijkstra's algorithm cannot handle negative edge weights correctly.
"What is the shortest way to travel from Rotterdam to Groningen? It is the algorithm for the shortest path, which I designed in about twenty minutes." -- Edsger W. Dijkstra
2. How It Works
Dijkstra's algorithm maintains two sets of vertices: those whose shortest distance from the source has been finalized (the "settled" set) and those that are still candidates (the "unsettled" set). It uses a priority queue to efficiently select the unsettled vertex with the minimum distance estimate.
Core Principles
- Initialize: Set the distance to the source vertex to 0 and all other distances to infinity. Insert the source vertex into a priority queue keyed by distance (some variants insert every vertex up front).
- Extract minimum: Remove the vertex u with the smallest distance from the priority queue. This vertex's distance is now finalized.
- Relax neighbors: For each neighbor v of u, check whether the path through u offers a shorter distance. If dist[u] + weight(u, v) < dist[v], update dist[v] and record u as the predecessor of v.
- Repeat: Continue extracting and relaxing until the priority queue is empty or all reachable vertices have been settled.
Critical requirement: Dijkstra's algorithm produces incorrect results if any edge weight is negative. For graphs with negative weights, use the Bellman-Ford algorithm instead.
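To see why the non-negativity requirement matters, the following sketch (a minimal Python version of the algorithm, using the standard heapq module and an adjacency-list dict of this author's choosing) runs on a small graph with one negative edge. The true shortest path A -> C -> B costs 1, but the algorithm settles B at distance 2 before the negative edge out of C is ever examined:

```python
import heapq

def dijkstra(graph, source):
    """Textbook Dijkstra with lazy deletion. Assumes non-negative weights."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    visited = set()
    pq = [(0, source)]                      # (distance, vertex) pairs
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:                    # skip stale queue entries
            continue
        visited.add(u)                      # u's distance is now "final"
        for v, w in graph[u]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# One negative edge breaks the greedy argument: B is settled at 2,
# even though the path A -> C -> B has total weight 5 + (-4) = 1.
graph = {'A': [('B', 2), ('C', 5)], 'B': [], 'C': [('B', -4)]}
print(dijkstra(graph, 'A'))  # B reported as 2, but the true shortest is 1
```

Bellman-Ford, which revisits edges rather than finalizing vertices greedily, returns the correct value of 1 on this graph.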
3. Algorithm and Pseudocode
```
function Dijkstra(Graph, source):
    // Step 1: Initialize
    for each vertex v in Graph.vertices:
        dist[v] = INFINITY
        predecessor[v] = NULL
        visited[v] = false
    dist[source] = 0

    // Priority queue: (distance, vertex)
    PQ = MinPriorityQueue()
    PQ.insert((0, source))

    while PQ is not empty:
        (d, u) = PQ.extractMin()
        if visited[u]:
            continue
        visited[u] = true

        // Step 2: Relax all neighbors
        for each neighbor v of u:
            if not visited[v]:
                newDist = dist[u] + weight(u, v)
                if newDist < dist[v]:
                    dist[v] = newDist
                    predecessor[v] = u
                    PQ.insert((newDist, v))

    return dist[], predecessor[]
```

Note: This version uses a lazy deletion approach, in which duplicate entries in the priority queue are skipped via the visited check. An alternative approach uses a decrease-key operation, whose availability depends on the priority queue implementation.
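The pseudocode translates almost line for line into Python. The sketch below (assuming a graph represented as a dict mapping each vertex to a list of (neighbor, weight) pairs, plus a small helper for walking predecessor links) is one possible concrete rendering, not the only one:

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns (dist, predecessor) dictionaries, mirroring the pseudocode."""
    dist = {v: float('inf') for v in graph}
    predecessor = {v: None for v in graph}
    dist[source] = 0
    visited = set()
    pq = [(0, source)]                      # (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:                    # lazy deletion: drop stale entries
            continue
        visited.add(u)
        for v, w in graph[u]:
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w             # relax the edge (u, v)
                predecessor[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, predecessor

def shortest_path(predecessor, target):
    """Reconstruct the path to target by walking predecessor links backward."""
    path = []
    while target is not None:
        path.append(target)
        target = predecessor[target]
    return path[::-1]
```

Python's heapq has no decrease-key operation, which is exactly why the lazy-deletion variant is the natural fit here.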
4. Complexity Analysis
| Implementation | Extract-Min | Decrease-Key | Total Time |
|---|---|---|---|
| Array (linear scan) | O(V) | O(1) | O(V^2) |
| Binary heap | O(log V) | O(log V) | O((V+E) log V) |
| Fibonacci heap | O(log V) | O(1) amortized | O(V log V + E) |
In all three cases the space complexity is O(V + E), covering the adjacency-list representation of the graph, the distance and predecessor arrays, and the priority queue.
For sparse graphs (E = O(V)), the binary heap implementation is preferred. For dense graphs (E = O(V^2)), the simple array implementation with O(V^2) time can actually outperform heap-based approaches due to lower constant factors.
5. Worked Example
Consider an undirected weighted graph with 6 vertices (A through F):
| Edge | Weight |
|---|---|
| A -- B | 4 |
| A -- C | 2 |
| B -- C | 1 |
| B -- D | 5 |
| C -- D | 8 |
| C -- E | 10 |
| D -- E | 2 |
| D -- F | 6 |
| E -- F | 3 |
Source vertex: A
Step-by-Step Trace
Step 1: Start with A (dist = 0)
Settled: {A}. Relax neighbors of A:
- A->B: dist[B] = 4
- A->C: dist[C] = 2
dist = [A:0, B:4, C:2, D:inf, E:inf, F:inf]
Step 2: Extract C (dist = 2)
Settled: {A, C}. Relax neighbors of C:
- C->B: 2+1 = 3 < 4, so dist[B] = 3
- C->D: 2+8 = 10, dist[D] = 10
- C->E: 2+10 = 12, dist[E] = 12
dist = [A:0, B:3, C:2, D:10, E:12, F:inf]
Step 3: Extract B (dist = 3)
Settled: {A, C, B}. Relax neighbors of B:
- B->D: 3+5 = 8 < 10, so dist[D] = 8
dist = [A:0, B:3, C:2, D:8, E:12, F:inf]
Step 4: Extract D (dist = 8)
Settled: {A, C, B, D}. Relax neighbors of D:
- D->E: 8+2 = 10 < 12, so dist[E] = 10
- D->F: 8+6 = 14, dist[F] = 14
dist = [A:0, B:3, C:2, D:8, E:10, F:14]
Step 5: Extract E (dist = 10)
Settled: {A, C, B, D, E}. Relax neighbors of E:
- E->F: 10+3 = 13 < 14, so dist[F] = 13
dist = [A:0, B:3, C:2, D:8, E:10, F:13]
Step 6: Extract F (dist = 13)
Settled: {A, C, B, D, E, F}. No further updates.
Final Shortest Distances from A
| Vertex | Distance | Shortest Path |
|---|---|---|
| A | 0 | A |
| B | 3 | A -> C -> B |
| C | 2 | A -> C |
| D | 8 | A -> C -> B -> D |
| E | 10 | A -> C -> B -> D -> E |
| F | 13 | A -> C -> B -> D -> E -> F |
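The trace above can be double-checked with a short script: a compact heapq-based run (one plausible sketch; the edge list is the table from the start of this section, inserted in both directions because the graph is undirected):

```python
import heapq

# Edge table from the worked example (undirected, so add both directions).
edges = [('A', 'B', 4), ('A', 'C', 2), ('B', 'C', 1), ('B', 'D', 5),
         ('C', 'D', 8), ('C', 'E', 10), ('D', 'E', 2), ('D', 'F', 6),
         ('E', 'F', 3)]
graph = {v: [] for v in 'ABCDEF'}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

dist = {v: float('inf') for v in graph}
dist['A'] = 0
pq, visited = [(0, 'A')], set()
while pq:
    d, u = heapq.heappop(pq)
    if u in visited:
        continue
    visited.add(u)
    for v, w in graph[u]:
        if d + w < dist[v]:
            dist[v] = d + w
            heapq.heappush(pq, (d + w, v))

print(dist)  # {'A': 0, 'B': 3, 'C': 2, 'D': 8, 'E': 10, 'F': 13}
```

The printed distances match the final table above.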
6. Priority Queue Implementations
The choice of priority queue data structure significantly impacts the performance of Dijkstra's algorithm. The three main options each offer different trade-offs:
Array-Based (Unordered)
The simplest implementation stores distances in a plain array and scans linearly to find the minimum. This yields O(V^2) total time and is optimal for dense graphs where E approaches V^2.
Binary Heap
The most commonly used implementation in practice. Provides O(log V) extract-min and decrease-key operations, giving O((V+E) log V) overall. Most standard libraries provide binary heap implementations.
Fibonacci Heap
Offers the theoretically best bound of O(V log V + E) thanks to O(1) amortized decrease-key. However, the large constant factors and implementation complexity make it slower in practice for most graph sizes.
Practical note: For most real-world applications, a binary heap with lazy deletion (allowing duplicate entries and skipping already-visited vertices) provides the best balance of simplicity and performance.
7. Advantages and Disadvantages
Advantages
- Efficiency: Faster than Bellman-Ford for graphs without negative weights, especially with a good priority queue.
- Optimal substructure: The greedy approach guarantees optimal results for non-negative weight graphs.
- Versatility: Works on both directed and undirected graphs.
- Early termination: Can stop once the target vertex is settled, without computing all shortest paths.
- Widely implemented: Available in virtually all graph libraries and navigation software.
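The early-termination advantage listed above is easy to realize in code: once the target vertex is extracted from the priority queue, its distance is final and the search can stop. A minimal sketch (assuming the same adjacency-list dict representation used throughout, with a hypothetical helper name dijkstra_to_target):

```python
import heapq

def dijkstra_to_target(graph, source, target):
    """Return the shortest distance from source to target, stopping early.
    When target is popped from the queue, its distance is already final."""
    dist = {source: 0}
    pq, visited = [(0, source)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        if u == target:
            return d                        # target settled: stop here
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')                     # target unreachable
```

On large graphs with a nearby target, this can skip most of the work of a full single-source run; A* search takes the same idea further by biasing the queue toward the target with a heuristic.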
Disadvantages
- No negative weights: Produces incorrect results if any edge weight is negative.
- No negative cycle detection: Cannot identify negative-weight cycles.
- Memory overhead: Priority queue implementations add space overhead compared to simpler algorithms.
- Not optimal for all-pairs: Running Dijkstra from every vertex is less efficient than Floyd-Warshall for dense graphs.
8. Comparison with Other Algorithms
| Feature | Dijkstra | Bellman-Ford | A* Search |
|---|---|---|---|
| Approach | Greedy | Dynamic Programming | Greedy + Heuristic |
| Negative Weights | No | Yes | No |
| Time Complexity | O((V+E) log V) | O(V * E) | Depends on heuristic |
| Target-Oriented | Can stop early | Computes all paths | Yes, guided by heuristic |
| Optimality | Optimal | Optimal | Optimal with admissible heuristic |
9. Applications
- GPS navigation: Finding the shortest driving route between two locations in road networks.
- Network routing: Open Shortest Path First (OSPF) protocol uses Dijkstra's algorithm to compute routing tables in IP networks.
- Social networks: Computing degrees of separation and finding the closest connections between users.
- Robotics: Path planning for autonomous robots navigating through environments with varying terrain costs.
- Telecommunications: Routing signals through networks to minimize latency or cost.
- Flight scheduling: Finding cheapest or fastest flight connections between airports.
10. References
- Dijkstra, E. W. (1959). "A Note on Two Problems in Connexion with Graphs." Numerische Mathematik, 1, 269-271.
- Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. (2009). Introduction to Algorithms (3rd ed.). MIT Press. Chapter 24.
- Fredman, M. L. and Tarjan, R. E. (1987). "Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms." Journal of the ACM, 34(3), 596-615.
- Sedgewick, R. and Wayne, K. (2011). Algorithms (4th ed.). Addison-Wesley. Section 4.4.
- Skiena, S. S. (2008). The Algorithm Design Manual (2nd ed.). Springer. Chapter 6.