Advanced Greedy Interview Patterns: Interval Scheduling, Jump Game, and Huffman Coding (2025)

When Does Greedy Work?

Greedy works when the locally optimal choice at each step leads to a globally optimal solution. Formal condition: the problem has the “greedy choice property”: making the locally optimal choice never eliminates a globally optimal solution.

Proof technique: the exchange argument. Assume the greedy solution is not optimal. Show that any optimal solution can be transformed, step by step, into the greedy solution without worsening the result; since the greedy solution is then at least as good as an optimal one, it is itself optimal.

Greedy vs DP: if the greedy choice property holds, greedy is faster and simpler than DP. If it doesn’t hold (a locally optimal choice can prevent reaching the global optimum), use DP. Litmus test: can you prove that the greedy choice is always safe? If you can’t, it is probably a DP problem.

Pattern 1: Interval Scheduling

Non-overlapping intervals (LC 435 Non-overlapping Intervals, LC 452 Minimum Number of Arrows to Burst Balloons): sort by end time. Greedy: always select the interval ending earliest; this leaves the maximum room for future intervals. For LC 435: count the non-overlapping intervals by greedily selecting the earliest-ending ones. Intervals to remove = total − count. For LC 452 (balloons): count the minimum number of arrows. Sort by end. One arrow at the end of the first interval pops every balloon overlapping it. Move to the first non-overlapping balloon and repeat.

def min_arrows(points):
    if not points:
        return 0
    points.sort(key=lambda x: x[1])  # sort by end coordinate
    arrows = 1
    end = points[0][1]  # shoot the first arrow at the end of the earliest-ending balloon
    for start, finish in points[1:]:
        if start > end:  # balloon starts after the arrow: new non-overlapping group
            arrows += 1
            end = finish
    return arrows
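
The same sort-by-end greedy solves LC 435 directly. A minimal sketch (function name is illustrative; note that touching intervals like [1,2] and [2,3] do not count as overlapping in LC 435, hence >= rather than >):

```python
def erase_overlap_intervals(intervals):
    # Keep the maximum number of non-overlapping intervals,
    # then removals = total - kept.
    if not intervals:
        return 0
    intervals.sort(key=lambda x: x[1])  # sort by end time
    kept = 1
    end = intervals[0][1]
    for start, finish in intervals[1:]:
        if start >= end:  # no overlap (touching endpoints allowed)
            kept += 1
            end = finish
    return len(intervals) - kept
```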

Pattern 2: Jump Game (LC 55, 45)

LC 55 (can you reach the end?): track max_reach. At each position i, if i > max_reach: you are stuck, return False. Otherwise: max_reach = max(max_reach, i + nums[i]). If max_reach >= n-1: return True.

LC 45 (minimum jumps): greedy, BFS-style. Track current_end (farthest reach of the current jump) and farthest (farthest reach of the next jump). When i reaches current_end: increment jumps, set current_end = farthest. This is O(n) vs the O(n^2) DP.

def jump(nums):
    jumps = 0
    current_end = 0  # farthest index reachable with `jumps` jumps
    farthest = 0     # farthest index reachable with one more jump
    for i in range(len(nums) - 1):  # no need to jump from the last index
        farthest = max(farthest, i + nums[i])
        if i == current_end:  # end of the current BFS level: must jump
            jumps += 1
            current_end = farthest
    return jumps
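
The LC 55 variant described above is even shorter; a minimal sketch (function name is illustrative):

```python
def can_jump(nums):
    # Track the farthest index reachable from any index seen so far.
    max_reach = 0
    for i, step in enumerate(nums):
        if i > max_reach:  # position i is unreachable
            return False
        max_reach = max(max_reach, i + step)
    return True  # never got stuck, so the last index is reachable
```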

Pattern 3: Task Scheduler (LC 621)

Given tasks with frequencies and a cooldown n between instances of the same task: find the minimum time to complete all tasks. Greedy: always run the most frequent remaining task. Formula: compute max_count = the most frequent task’s count and max_count_tasks = the number of tasks with that frequency. Result = max((max_count - 1) * (n + 1) + max_count_tasks, total_tasks). Intuition: arrange the most frequent task in “blocks” of length (n + 1): [A, _, _, A, _, _, A]. Fill the blanks with other tasks. If there are enough tasks to fill all blanks: no idle time is needed; result = total_tasks. Otherwise: idle time = (max_count - 1) * (n + 1) + max_count_tasks - total_tasks.
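
The formula translates to a few lines; a minimal sketch (function name is illustrative):

```python
from collections import Counter

def least_interval(tasks, n):
    counts = Counter(tasks)
    max_count = max(counts.values())                              # highest frequency
    max_count_tasks = sum(1 for c in counts.values() if c == max_count)  # ties at that frequency
    # Frame forced by the most frequent task(s); if other tasks
    # overflow the idle slots, the answer is just len(tasks).
    frame = (max_count - 1) * (n + 1) + max_count_tasks
    return max(frame, len(tasks))
```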

Pattern 4: Huffman Coding / Greedy on Heap

Build an optimal prefix-free encoding: merge the two least-frequent symbols repeatedly. Use a min-heap. Extract two minimums, combine (sum their frequencies), re-insert. Repeat until one node remains. This is a greedy algorithm: merging the two smallest at each step is provably optimal (exchange argument: swapping the smallest pair with any other pair would not decrease the total encoding cost). O(n log n) time. Pattern generalizes: whenever you need to greedily combine the two smallest values, a min-heap is the right data structure. LC 1167 (Minimum Cost to Connect Sticks): same algorithm — merge the two shortest sticks, pay the combined length as the cost. LC 502 (IPO): two-heap problem — one min-heap for capital requirements, one max-heap for profits. Greedily take the highest-profit project you can currently afford.
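
The LC 1167 version of this pattern is the simplest to write out; a minimal sketch using Python’s heapq (function name is illustrative):

```python
import heapq

def connect_sticks(sticks):
    # Repeatedly merge the two shortest sticks; pay their combined
    # length as the cost. Same structure as Huffman tree construction.
    heapq.heapify(sticks)  # note: mutates the input list in place
    total = 0
    while len(sticks) > 1:
        a = heapq.heappop(sticks)  # two smallest values
        b = heapq.heappop(sticks)
        total += a + b
        heapq.heappush(sticks, a + b)  # the merged stick goes back in
    return total
```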

Pattern 5: Candy / Allocation Problems

LC 135 (Candy): each child gets at least 1 candy; children with higher ratings than neighbors get more. Two-pass greedy: left-to-right pass enforces the “higher than left neighbor” rule. Right-to-left pass enforces the “higher than right neighbor” rule. Take the max of both passes. Two passes is a common pattern for problems with constraints from both directions.
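
The two-pass idea for LC 135 as code; a minimal sketch (function name is illustrative):

```python
def candy(ratings):
    n = len(ratings)
    candies = [1] * n  # everyone starts with at least 1
    # Left-to-right: enforce "higher rating than left neighbor gets more".
    for i in range(1, n):
        if ratings[i] > ratings[i - 1]:
            candies[i] = candies[i - 1] + 1
    # Right-to-left: enforce "higher rating than right neighbor gets more",
    # taking the max so the first pass's constraint is preserved.
    for i in range(n - 2, -1, -1):
        if ratings[i] > ratings[i + 1]:
            candies[i] = max(candies[i], candies[i + 1] + 1)
    return sum(candies)
```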

{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "How do you prove that a greedy algorithm is correct?",
"acceptedAnswer": {
"@type": "Answer",
"text": "The exchange argument proof: assume the greedy solution G is not optimal. Let OPT be an optimal solution. Find the first position where G and OPT differ. Show that you can modify OPT at that position to match G's choice without making OPT worse. Formally: if OPT has choice B at position i where G has choice A (the greedy choice), show that swapping B for A in OPT does not increase the cost (or decrease the value). After the swap, OPT is at least as good and agrees with G at position i. Repeat this argument for all positions — OPT converges to G without getting worse. This proves G is optimal. Example for interval scheduling: if OPT picks interval B (ends later) where G picks interval A (ends earlier), swapping B for A in OPT keeps the same number of intervals scheduled (A ends no later than B, so A conflicts with nothing that B didn't conflict with). This proves sorting by end time and greedily picking non-overlapping intervals is optimal."
}
},
{
"@type": "Question",
"name": "What is the difference between the greedy approach for LC 55 (Jump Game I) and LC 45 (Jump Game II)?",
"acceptedAnswer": {
"@type": "Answer",
"text": "LC 55 asks: can you reach the end? Greedy: track max_reach (the farthest index reachable from any index seen so far). For each index i: if i > max_reach, you're stuck — return False. Otherwise: max_reach = max(max_reach, i + nums[i]). If max_reach >= n-1 at any point: return True. O(n) time, O(1) space. LC 45 asks: what is the minimum number of jumps? BFS-level greedy: treat each 'jump' as one BFS level. current_end = farthest reachable with current number of jumps. farthest = farthest reachable with one more jump. When i reaches current_end: increment jumps, set current_end = farthest (we must jump to extend our reach). This simulates BFS levels without a queue — O(n) time, O(1) space vs O(n) space for explicit BFS. Common mistake on LC 45: including the last index in the loop and over-counting jumps. Loop only up to len(nums)-1 since you don't need to jump from the last position."
}
},
{
"@type": "Question",
"name": "How does the task scheduler problem (LC 621) use greedy thinking?",
"acceptedAnswer": {
"@type": "Answer",
"text": "The key insight: the most frequent task determines the minimum time. Say the most frequent task A appears max_count times. We must have at least max_count-1 gaps of length n between consecutive A's, plus one final A. Minimum frame: (max_count-1) * (n+1) + 1. If multiple tasks tie for max frequency (max_count_tasks of them): minimum frame = (max_count-1) * (n+1) + max_count_tasks. If total tasks are enough to fill all idle slots: result = total_tasks (no idle time). Final formula: max((max_count-1) * (n+1) + max_count_tasks, total_tasks). The greedy intuition: you don't need to simulate the scheduler. Just calculate how many idle slots the most frequent task forces, then check if other tasks can fill those slots. If there are more tasks than slots: no idle time is needed and the answer is simply the total number of tasks. This avoids simulating the priority queue approach (which also works but is O(n log k) vs O(n))."
}
},
{
"@type": "Question",
"name": "What greedy approach solves the gas station problem (LC 134)?",
"acceptedAnswer": {
"@type": "Answer",
"text": "LC 134: circular array of gas stations with gas[i] available and cost[i] to travel to the next station. Find the starting station to complete the circuit, or -1 if impossible. Two observations: (1) If sum(gas) >= sum(cost): a solution exists and the greedy finds it. (2) If you run out of fuel somewhere between start and station i, no station in that stretch can be the answer, so the next candidate is i+1. Algorithm: traverse stations linearly. Track current_tank (surplus since the current candidate start) and total_tank (overall surplus). If current_tank drops below 0 at station i: set start = i+1 and reset current_tank to 0. At the end: return start if total_tank >= 0, else -1. O(n) time, O(1) space."
}
},
{
"@type": "Question",
"name": "How does a min-heap solve the minimum meeting rooms problem (LC 253)?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Sort the intervals by start time. Maintain a min-heap of end times of the meetings currently in progress. For each interval: if its start time >= heap minimum (i.e., the earliest-finishing interval has ended): pop from the heap (that resource is now free). Push the current interval's end time onto the heap. The heap size at any point = number of resources in use. Maximum heap size over all intervals = answer. Why a min-heap: we always want to reuse the resource that became free earliest. O(n log n) time (sort + heap operations). Alternative for counting overlaps: create events: +1 at each start, -1 at each end. Sort events (with tie-breaking: end before start at same time). Sweep through, track running sum. Maximum running sum = answer. Both approaches are O(n log n). The heap approach is more intuitive for interviews."
}
}
]
}
