1. Introduction
The Coin Change problem is one of the most classic examples in dynamic programming. Given a set of coin denominations and a target amount, the goal is to find the minimum number of coins needed to make that amount. This problem appears frequently in technical interviews and serves as an excellent introduction to DP thinking because the greedy approach -- which seems intuitive -- does not always produce the correct answer.
The problem neatly illustrates the core motivation behind dynamic programming: locally optimal choices do not always lead to a global optimum.
Unlike many problems where a greedy strategy works (such as making change with standard US denominations), the Coin Change problem in its general form requires examining overlapping subproblems, making dynamic programming the ideal technique.
2. Problem Definition
Input: An array of coin denominations coins = [c1, c2, ..., ck] and a target amount amount.
Output: The minimum number of coins needed to make amount. If it is not possible to make the amount with the given denominations, return -1.
Constraints: Each coin denomination can be used an unlimited number of times. All coin values and the target amount are positive integers.
| Parameter | Description | Example |
|---|---|---|
| coins | Array of available denominations | [1, 5, 10, 25] |
| amount | Target sum to achieve | 30 |
| Return value | Minimum coins needed, or -1 if impossible | 2 (25 + 5) |
3. Why Greedy Fails
A greedy approach would always pick the largest coin that does not exceed the remaining amount. While this works for certain coin systems (like standard US currency: 1, 5, 10, 25), it fails for arbitrary denominations.
Greedy Failure Example
Consider coins = [1, 3, 4] and amount = 6.
- Greedy approach: Pick 4, then 1, then 1. Result: 3 coins (4 + 1 + 1).
- Optimal answer: Pick 3 and 3. Result: 2 coins (3 + 3).
Key Insight: The greedy algorithm fails because choosing the locally largest coin can prevent us from finding a combination that uses fewer coins overall. This is precisely the scenario where dynamic programming excels.
| Strategy | Coins Used | Count | Optimal? |
|---|---|---|---|
| Greedy (largest first) | 4 + 1 + 1 | 3 | No |
| Dynamic Programming | 3 + 3 | 2 | Yes |
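The greedy strategy described above is easy to code directly, which makes the failure concrete. Below is a short Python sketch (the function name `greedy_change` is my own) that always takes the largest coin that fits:

```python
def greedy_change(coins, amount):
    """Repeatedly take the largest coin that fits; may be suboptimal."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin   # take as many of this coin as possible
        amount %= coin
    return count if amount == 0 else -1

# coins = [1, 3, 4], amount = 6: greedy picks 4 + 1 + 1 = 3 coins,
# but the optimum is 3 + 3 = 2 coins.
print(greedy_change([1, 3, 4], 6))       # 3 (suboptimal)
print(greedy_change([1, 5, 10, 25], 30)) # 2 (US denominations happen to work)
```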
4. Brute Force Approach
The brute force solution tries every possible combination of coins recursively. For each coin, we either include it (and reduce the amount) or skip it. We explore all branches and return the minimum.
```
function coinChange(coins, amount):
    if amount == 0: return 0
    if amount < 0: return infinity
    minCoins = infinity
    for coin in coins:
        result = coinChange(coins, amount - coin)
        if result != infinity:
            minCoins = min(minCoins, result + 1)
    return minCoins
```

This approach has exponential time complexity because it recomputes the same subproblems many times. For example, when computing coinChange(coins, 10), the subproblem coinChange(coins, 5) may be evaluated dozens of times through different paths.
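For reference, the brute-force pseudocode translates to Python almost line for line (this is an illustrative translation, not part of the original text; it uses `math.inf` for the pseudocode's infinity):

```python
import math

def coin_change_brute(coins, amount):
    """Exponential-time recursion: try every coin at every remaining amount."""
    if amount == 0:
        return 0
    if amount < 0:
        return math.inf
    min_coins = math.inf
    for coin in coins:
        result = coin_change_brute(coins, amount - coin)
        if result != math.inf:
            min_coins = min(min_coins, result + 1)
    return min_coins

print(coin_change_brute([1, 3, 4], 6))  # 2
```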
5. Optimal Substructure
The Coin Change problem exhibits optimal substructure: an optimal solution to the problem contains optimal solutions to its subproblems. If the optimal way to make amount A uses coin c as the last coin, then the remaining coins must form an optimal solution for amount A - c.
It also exhibits overlapping subproblems: different choices of coins lead to the same remaining amount, so the same subproblem is solved multiple times in the naive recursion. These two properties together make the problem ideal for dynamic programming.
6. Recurrence Relation
Let dp[i] represent the minimum number of coins needed to make amount i.
Recurrence: dp[i] = min(dp[i - c] + 1) for all coins c where c <= i
Base case: dp[0] = 0 (zero coins needed for amount zero)
Initialization: dp[i] = infinity for all i > 0
For each amount i, we try every coin denomination c. If c <= i and dp[i - c] is not infinity (meaning i - c is achievable), then dp[i] can be updated to dp[i - c] + 1 if that value is smaller than the current dp[i].
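A single step of this recurrence can be evaluated directly. The snippet below (illustrative, with dp[0..5] pre-filled with the values the algorithm would have computed for coins = [1, 3, 4]) shows how dp[6] follows from earlier entries:

```python
import math

# Assumed already computed for coins = [1, 3, 4]: dp[0..5]
coins = [1, 3, 4]
dp = [0, 1, 2, 1, 1, 2]
i = 6

# dp[i] = min over valid coins c of dp[i - c] + 1
dp_i = min((dp[i - c] + 1 for c in coins if c <= i), default=math.inf)
print(dp_i)  # 2, via dp[3] + 1 (coin 3)
```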
7. Top-Down (Memoization)
The top-down approach uses recursion with a memo table to avoid recomputing subproblems.
```
function coinChangeMemo(coins, amount, memo):
    if amount == 0: return 0
    if amount < 0: return infinity
    if memo[amount] != -1: return memo[amount]
    minCoins = infinity
    for coin in coins:
        result = coinChangeMemo(coins, amount - coin, memo)
        minCoins = min(minCoins, result + 1)
    memo[amount] = minCoins
    return minCoins
```

The memo array stores the result for each amount so that each subproblem is solved only once. This reduces the time complexity from exponential to polynomial.
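A runnable Python version of the same idea follows; as an implementation choice (not dictated by the pseudocode), it uses a dictionary for the memo table instead of a -1-filled array:

```python
import math

def coin_change_memo(coins, amount, memo=None):
    """Top-down DP: cache the minimum coin count for each remaining amount."""
    if memo is None:
        memo = {}
    if amount == 0:
        return 0
    if amount < 0:
        return math.inf
    if amount in memo:
        return memo[amount]
    min_coins = math.inf
    for coin in coins:
        result = coin_change_memo(coins, amount - coin, memo)
        min_coins = min(min_coins, result + 1)  # inf + 1 stays inf
    memo[amount] = min_coins
    return min_coins

print(coin_change_memo([1, 3, 4], 6))  # 2
```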
8. Bottom-Up (Tabulation)
The bottom-up approach builds the solution iteratively from the smallest subproblem (amount 0) up to the target amount.
```
function coinChangeDP(coins, amount):
    dp = array of size (amount + 1), filled with infinity
    dp[0] = 0
    for i = 1 to amount:
        for coin in coins:
            if coin <= i and dp[i - coin] != infinity:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    if dp[amount] == infinity: return -1
    return dp[amount]
```

This iterative approach avoids recursion overhead and stack overflow risks for large inputs.
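The tabulation pseudocode maps directly onto Python. This sketch also returns -1 for unreachable amounts, matching the problem definition:

```python
import math

def coin_change_dp(coins, amount):
    """Bottom-up DP: dp[i] = min coins to make amount i, built from i = 1 up."""
    dp = [math.inf] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i and dp[i - coin] != math.inf:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return -1 if dp[amount] == math.inf else dp[amount]

print(coin_change_dp([1, 3, 4], 6))  # 2
print(coin_change_dp([2], 3))        # -1 (odd amounts are unreachable)
```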
9. Worked Example
Let us trace through the algorithm with coins = [1, 3, 4] and amount = 6.
DP Table Construction
We initialize dp[0] = 0 and all other entries to infinity. Then we fill each cell by checking all coin denominations.
| Amount (i) | Try coin 1 | Try coin 3 | Try coin 4 | dp[i] |
|---|---|---|---|---|
| 0 | -- | -- | -- | 0 (base) |
| 1 | dp[0]+1 = 1 | -- | -- | 1 |
| 2 | dp[1]+1 = 2 | -- | -- | 2 |
| 3 | dp[2]+1 = 3 | dp[0]+1 = 1 | -- | 1 |
| 4 | dp[3]+1 = 2 | dp[1]+1 = 2 | dp[0]+1 = 1 | 1 |
| 5 | dp[4]+1 = 2 | dp[2]+1 = 3 | dp[1]+1 = 2 | 2 |
| 6 | dp[5]+1 = 3 | dp[3]+1 = 2 | dp[2]+1 = 3 | 2 |
The answer is dp[6] = 2, achieved by using two coins of denomination 3 (3 + 3 = 6). Notice how the DP approach correctly finds this solution, while the greedy approach would have picked 4 first and needed 3 coins.
Reconstructing the Solution
To find which coins were used, we can backtrack from dp[6]. Since dp[6] = dp[3] + 1 (using coin 3), and dp[3] = dp[0] + 1 (using coin 3 again), the solution is coins [3, 3].
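The backtracking step described above can be bolted onto the tabulation code. This illustrative sketch (the function name is my own) walks backward from the target, at each step finding a coin whose removal drops the coin count by exactly one:

```python
import math

def coin_change_with_coins(coins, amount):
    """Return (min count, list of coins used), or (-1, []) if impossible."""
    dp = [math.inf] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i and dp[i - coin] + 1 < dp[i]:
                dp[i] = dp[i - coin] + 1
    if dp[amount] == math.inf:
        return -1, []
    # Backtrack: find a coin c with dp[i - c] == dp[i] - 1, remove it, repeat.
    used, i = [], amount
    while i > 0:
        for coin in coins:
            if coin <= i and dp[i - coin] == dp[i] - 1:
                used.append(coin)
                i -= coin
                break
    return dp[amount], used

print(coin_change_with_coins([1, 3, 4], 6))  # (2, [3, 3])
```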
10. Complexity Analysis
| Approach | Time Complexity | Space Complexity |
|---|---|---|
| Brute Force (Recursion) | O(k^n) where k = number of coins, n = amount | O(n) recursion stack |
| Top-Down (Memoization) | O(n * k) | O(n) memo table + recursion stack |
| Bottom-Up (Tabulation) | O(n * k) | O(n) DP table |
Both DP approaches run in O(n * k) time where n is the target amount and k is the number of coin denominations. The bottom-up approach is generally preferred because it avoids recursion overhead.
11. Variations
Counting All Ways
Instead of finding the minimum number of coins, count the number of ways to make the amount. The recurrence changes to dp[i] = sum(dp[i - c]) for all valid coins c. One subtlety: iterating amounts in the outer loop counts ordered sequences of coins, while iterating coins in the outer loop counts distinct combinations.
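A short sketch of the combination-counting version (coins in the outer loop, so each multiset of coins is counted once):

```python
def count_ways(coins, amount):
    """Number of distinct coin combinations (multisets) summing to amount."""
    dp = [0] * (amount + 1)
    dp[0] = 1  # one way to make 0: use no coins
    # Outer loop over coins counts combinations rather than orderings.
    for coin in coins:
        for i in range(coin, amount + 1):
            dp[i] += dp[i - coin]
    return dp[amount]

# coins = [1, 3, 4], amount = 6:
# {1,1,1,1,1,1}, {1,1,1,3}, {3,3}, {1,1,4} -> 4 ways
print(count_ways([1, 3, 4], 6))  # 4
```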
Limited Supply of Coins
If each coin can only be used a limited number of times, the problem becomes a variant of the bounded knapsack problem. The inner loop must track how many times each denomination has been used.
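One simple way to handle limited supply, sketched below under my own naming and interface assumptions, is to treat each allowed copy of a coin as a separate 0/1 item and sweep amounts in decreasing order so a copy is never reused (an O(amount * sum of limits) approach; faster binary-splitting variants exist):

```python
import math

def coin_change_bounded(coins, limits, amount):
    """Min coins when coins[j] may be used at most limits[j] times."""
    dp = [math.inf] * (amount + 1)
    dp[0] = 0
    for coin, limit in zip(coins, limits):
        for _ in range(limit):
            # Descending sweep: this copy of the coin is used at most once.
            for i in range(amount, coin - 1, -1):
                if dp[i - coin] != math.inf:
                    dp[i] = min(dp[i], dp[i - coin] + 1)
    return -1 if dp[amount] == math.inf else dp[amount]

print(coin_change_bounded([1, 3, 4], [2, 2, 2], 6))  # 2 (3 + 3)
print(coin_change_bounded([1, 3, 4], [1, 1, 1], 6))  # -1 (no subset sums to 6)
```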
Coin Change with Fewest Types
Minimize the number of distinct denominations used rather than the total number of coins. This variation requires tracking which denominations are selected.
12. Applications
- Vending machines: Calculating optimal change to return to customers using available coin inventory.
- Currency systems: Designing denomination sets that allow efficient change-making (canonical coin systems).
- Resource allocation: Distributing fixed-size resources to meet a target capacity with minimum waste.
- Network packet sizing: Choosing packet sizes from available MTU options to transmit data with minimum overhead.
- Interview preparation: The Coin Change problem is one of the most frequently asked DP questions in software engineering interviews.
13. References
- Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. Introduction to Algorithms (4th ed.). MIT Press, 2022. Chapter 14: Dynamic Programming.
- Kleinberg, J. and Tardos, E. Algorithm Design. Pearson, 2005. Chapter 6: Dynamic Programming.
- Sedgewick, R. and Wayne, K. Algorithms (4th ed.). Addison-Wesley, 2011.
- Bellman, R. Dynamic Programming. Princeton University Press, 1957.