When Greedy Works: The Exchange Argument
Greedy algorithms make the locally optimal choice at each step. They are provably correct when you can show that swapping any suboptimal choice with the greedy choice never makes the solution worse. This is the exchange argument proof technique.
Key questions to ask:
- Does the greedy choice leave the remaining subproblem in the best possible state?
- Can I prove that any deviation from the greedy choice is either equal or worse?
If subproblems overlap and the optimal substructure depends on prior choices, greedy fails and you need dynamic programming.
Activity Selection / Interval Scheduling
Problem: given a list of intervals, select the maximum number of non-overlapping intervals.
Greedy insight: always pick the interval that finishes earliest. This leaves the most room for future intervals.
```python
def activity_selection(intervals):
    if not intervals:
        return []
    # Sort by finish time
    intervals.sort(key=lambda x: x[1])
    selected = [intervals[0]]
    last_end = intervals[0][1]
    for start, end in intervals[1:]:
        if start >= last_end:
            selected.append((start, end))
            last_end = end
    return selected

# Time: O(n log n) for the sort, O(n) for the scan
# Exchange argument: replacing the earliest-finishing interval with any
# later-finishing interval can only block more future intervals, never fewer.
```
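The exchange argument can also be checked empirically on small inputs: compare the greedy earliest-finish count against an exhaustive search over all subsets. The sketch below is illustrative (the helper names `earliest_finish_count` and `brute_force_count` are mine, not standard), but it is a handy pattern for convincing yourself a greedy rule is safe before writing the proof.

```python
from itertools import combinations

def earliest_finish_count(intervals):
    """Greedy: number of intervals picked by the earliest-finish rule."""
    picked, last_end = 0, float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            picked += 1
            last_end = end
    return picked

def brute_force_count(intervals):
    """Exhaustive: size of the largest pairwise-compatible subset."""
    for r in range(len(intervals), 0, -1):
        for subset in combinations(intervals, r):
            ordered = sorted(subset, key=lambda iv: iv[1])
            # Sorted by finish time, adjacent compatibility implies
            # pairwise compatibility (each start >= the previous end).
            if all(b[0] >= a[1] for a, b in zip(ordered, ordered[1:])):
                return r
    return 0

intervals = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
assert earliest_finish_count(intervals) == brute_force_count(intervals)  # both 3
```

The brute-force check is exponential, so it only works as a sanity test on tiny instances; the point is that greedy matches it on every input you throw at it.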
Jump Game I and II (LC 55, LC 45)
Jump Game I: can you reach the last index? Track the furthest index reachable from any position visited so far.
```python
def can_jump(nums):
    furthest = 0
    for i, jump in enumerate(nums):
        if i > furthest:
            return False
        furthest = max(furthest, i + jump)
    return True

# Time: O(n), Space: O(1)
```
Jump Game II: minimum number of jumps to reach the last index.
```python
def jump(nums):
    jumps = 0
    current_boundary = 0  # furthest we can reach with 'jumps' jumps
    next_boundary = 0     # furthest we can reach with 'jumps + 1' jumps
    for i in range(len(nums) - 1):
        next_boundary = max(next_boundary, i + nums[i])
        if i == current_boundary:
            jumps += 1
            current_boundary = next_boundary
    return jumps

# Time: O(n), Space: O(1)
# Greedy: always extend the current jump as far as possible before committing.
```
Gas Station (LC 134)
Problem: a circular route of N gas stations; can you complete the circuit starting from some station? If total gas >= total cost, a solution always exists, and the valid start is the station just after the point where the running surplus last went negative.
```python
def can_complete_circuit(gas, cost):
    total_surplus = 0
    current_surplus = 0
    start = 0
    for i in range(len(gas)):
        diff = gas[i] - cost[i]
        total_surplus += diff
        current_surplus += diff
        if current_surplus < 0:
            # Can't reach station i+1 from 'start'; restart just after i
            current_surplus = 0
            start = i + 1
    return start if total_surplus >= 0 else -1

# Time: O(n), Space: O(1)
# Proof: if we fail at station i, no station between start and i can be
# a valid starting point (each enters the failing stretch with no more
# surplus than start had).
```
Candy Distribution (LC 135)
Problem: children in a line, each with a rating. Give each child at least 1 candy. Children with higher ratings than neighbors get more candies than those neighbors. Minimize total candies.
```python
def candy(ratings):
    n = len(ratings)
    candies = [1] * n
    # Left-to-right pass: handle ascending slopes
    for i in range(1, n):
        if ratings[i] > ratings[i-1]:
            candies[i] = candies[i-1] + 1
    # Right-to-left pass: handle descending slopes
    for i in range(n-2, -1, -1):
        if ratings[i] > ratings[i+1]:
            candies[i] = max(candies[i], candies[i+1] + 1)
    return sum(candies)

# Time: O(n), Space: O(n)
# Two separate greedy passes, each provably correct in isolation.
```
Task Scheduler (LC 621)
Problem: schedule CPU tasks with a cooldown period n between same-task repetitions. Find minimum intervals needed.
Greedy insight: the most frequent task dominates. Build “frames” of size (n+1) around the most frequent task. If other tasks fill frames, no idle time needed.
```python
from collections import Counter

def least_interval(tasks, n):
    counts = Counter(tasks)
    max_count = max(counts.values())
    # How many tasks share the maximum frequency
    max_count_tasks = sum(1 for c in counts.values() if c == max_count)
    # Formula: either tasks fill time naturally, or we need idle slots
    return max(len(tasks), (max_count - 1) * (n + 1) + max_count_tasks)

# Time: O(len(tasks)), Space: O(1) since at most 26 unique task types
```
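The frame formula is easiest to see on the classic example of three A's and three B's with cooldown 2; working it through by hand:

```python
from collections import Counter

tasks, n = list("AAABBB"), 2
counts = Counter(tasks)
max_count = max(counts.values())                          # 3 (A and B both)
tied = sum(1 for c in counts.values() if c == max_count)  # 2 tasks tie at 3
frames = (max_count - 1) * (n + 1) + tied                 # (3-1)*3 + 2 = 8
answer = max(len(tasks), frames)                          # 8
# One valid schedule: A B idle A B idle A B  (8 intervals)
```

The first `max_count - 1` occurrences each open a frame of `n + 1` slots, and the final frame holds only the `tied` maximum-frequency tasks; when there are enough other tasks to fill every frame, `len(tasks)` dominates instead.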
Meeting Rooms II (LC 253)
Problem: minimum number of conference rooms needed to schedule all meetings.
```python
import heapq

def min_meeting_rooms(intervals):
    if not intervals:
        return 0
    intervals.sort(key=lambda x: x[0])
    heap = []  # min-heap of end times (one entry per room in use)
    for start, end in intervals:
        if heap and heap[0] <= start:
            heapq.heapreplace(heap, end)  # reuse earliest-ending room
        else:
            heapq.heappush(heap, end)     # need a new room
    return len(heap)

# Time: O(n log n), Space: O(n)
```
Partition Labels (LC 763)
Problem: partition a string so that each letter appears in at most one part. Maximize the number of parts.
```python
def partition_labels(s):
    last = {c: i for i, c in enumerate(s)}  # last occurrence of each char
    result = []
    start = end = 0
    for i, c in enumerate(s):
        end = max(end, last[c])  # extend partition to cover last occurrence
        if i == end:
            result.append(end - start + 1)
            start = i + 1
    return result

# Time: O(n), Space: O(1) - at most 26 keys in the last-occurrence dict
```
When Greedy Fails: Coin Change Counterexample
Greedy (always pick the largest coin) fails with arbitrary denominations. Example: coins = [1, 3, 4], amount = 6.
- Greedy: 4 + 1 + 1 = 3 coins
- Optimal: 3 + 3 = 2 coins
The greedy choice (pick 4) leads to a suboptimal global result. Subproblems overlap and require DP. Greedy works for standard coin systems (US denominations) because they are canonical – a mathematical property most arbitrary coin sets do not have.
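The counterexample is quick to verify in code. The sketch below (helper names `greedy_coins` and `dp_coins` are mine) pits largest-coin-first against a standard bottom-up DP:

```python
def greedy_coins(coins, amount):
    """Always take the largest coin that fits (incorrect in general)."""
    count = 0
    for c in sorted(coins, reverse=True):
        take, amount = divmod(amount, c)
        count += take
    return count if amount == 0 else -1

def dp_coins(coins, amount):
    """Bottom-up DP: fewest coins for every amount up to the target."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] < INF else -1

print(greedy_coins([1, 3, 4], 6))  # 3  (4 + 1 + 1)
print(dp_coins([1, 3, 4], 6))      # 2  (3 + 3)
```

The DP is O(amount * len(coins)) but correct for any denomination set; greedy's O(len(coins) log len(coins)) speed is only safe on canonical systems.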
Decision Guide: Greedy vs DP
| Question | Answer | Algorithm |
|---|---|---|
| Do locally optimal choices lead to a globally optimal solution? | Yes – provable by exchange argument | Greedy – O(n log n) or O(n) |
| Do subproblems overlap and depend on prior decisions? | Yes | Dynamic Programming |
| Does the problem ask for a count of ways or maximum/minimum over all possibilities? | Yes | Likely DP |
| Does the greedy choice irrevocably narrow the solution space in a provably safe way? | Yes | Greedy |
Practical rule: try greedy first (simpler, faster). If you can construct a counterexample where the greedy choice leads to a suboptimal result, switch to DP. The exchange argument is your proof tool – if you can write it down, greedy is correct.