Dynamic programming (DP) is among the most feared topics in coding interviews, yet it follows predictable patterns. Once you recognize the pattern, the solution structure largely writes itself. This guide covers the core DP patterns that appear in roughly 90% of interview problems, with clear state transition formulas and implementation strategies that work in any programming language.
Recognizing a DP Problem
A problem is a candidate for DP when it has two properties: (1) Optimal substructure — the optimal solution to the problem contains optimal solutions to subproblems. The shortest path from A to C through B contains the shortest path from A to B and the shortest path from B to C. (2) Overlapping subproblems — the same subproblems are solved multiple times. Fibonacci: F(5) requires F(4) and F(3). F(4) requires F(3) and F(2). F(3) is computed twice. Interview signal words: “minimum cost”, “maximum profit”, “number of ways”, “longest/shortest”, “is it possible” (yes/no feasibility). If the brute force is exponential and the problem has these properties, DP will reduce it to polynomial time.
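To make the overlapping-subproblems point concrete, here is a minimal sketch (illustrative only, using a hypothetical `counts` parameter) that records how often each Fibonacci subproblem is recomputed by the naive recursion:

```python
def fib_naive(n: int, counts: dict) -> int:
    """Naive exponential Fibonacci; counts[k] records how many
    times F(k) is computed across the whole recursion."""
    counts[n] = counts.get(n, 0) + 1
    if n < 2:
        return n
    return fib_naive(n - 1, counts) + fib_naive(n - 2, counts)


counts = {}
fib_naive(5, counts)
# F(3) is reached once from F(5) and once from F(4) — computed twice,
# exactly the duplication that memoization eliminates.
```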
Memoization (Top-Down) vs Tabulation (Bottom-Up)
Two implementation approaches: (1) Memoization (top-down) — write the recursive solution, then cache results. Start from the final answer and recurse down to base cases. Use a hash map or array to store computed results. If the result for the current state is already cached, return it immediately. Advantages: only computes needed subproblems (lazy evaluation), easier to write (natural recursive thinking). Disadvantages: recursion stack overhead, risk of stack overflow for deep recursion. (2) Tabulation (bottom-up) — fill a table iteratively from base cases up to the final answer. No recursion. Iterate through states in dependency order, computing each from previously computed values. Advantages: no recursion overhead, easier to optimize space (often can reduce from O(N^2) to O(N) by keeping only the previous row). Disadvantages: must determine the correct iteration order, computes all subproblems (even unused ones). Interview advice: start with memoization (easier to reason about), then convert to tabulation if the interviewer asks for optimization.
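The two approaches can be sketched side by side for Fibonacci (a minimal illustration, not tied to any particular interview problem):

```python
from functools import lru_cache


# Top-down: write the natural recursion, then cache results.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:          # base cases stop the recursion
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)


# Bottom-up: iterate from the base cases upward. Since each value
# depends only on the previous two, O(1) space suffices.
def fib_tab(n: int) -> int:
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev
```

The tabulated version also sidesteps Python's recursion limit, which matters for large N as noted above.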
Pattern 1: 0/1 Knapsack
Problem: given N items with weights and values, and a knapsack with capacity W, maximize the total value without exceeding the capacity. Each item can be included at most once. State: dp[i][w] = maximum value using the first i items with capacity w. Transition: dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i]) if weight[i] <= w, else dp[i][w] = dp[i-1][w]. The first option skips item i; the second includes it. Base case: dp[0][w] = 0 for all w (no items, no value). Time: O(N * W). Space: O(N * W), optimizable to O(W) since each row only depends on the previous row. Variants: subset sum (can you select items that sum to exactly T?), equal partition (can you split items into two sets with equal sum?), target sum (assign + or - to items to reach a target). All follow the same DP structure with different transition formulas.
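The transition above can be sketched with the O(W) rolling-array optimization already applied (a minimal sketch; variable names are illustrative):

```python
def knapsack(weights: list, values: list, capacity: int) -> int:
    # dp[w] = best value achievable with capacity w, using items seen so far.
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Iterate capacity downward so dp[w - wt] still holds the
        # previous item's value (each item is used at most once).
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

Iterating w upward instead would allow each item to be reused, which turns this into the unbounded knapsack variant.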
Pattern 2: Longest Common Subsequence (LCS)
Problem: given two strings s1 and s2, find the length of their longest common subsequence. State: dp[i][j] = length of LCS of s1[0..i-1] and s2[0..j-1]. Transition: if s1[i-1] == s2[j-1], then dp[i][j] = dp[i-1][j-1] + 1 (characters match, extend the subsequence). Else dp[i][j] = max(dp[i-1][j], dp[i][j-1]) (skip one character from either string). Base case: dp[0][j] = 0, dp[i][0] = 0. Time: O(N * M). Space: O(N * M), optimizable to O(min(N, M)). To reconstruct the actual subsequence (not just the length), trace back from dp[N][M]: if s1[i-1] == s2[j-1], include this character and move diagonally; otherwise move in the direction of the larger value. Variants: longest common substring (consecutive characters — reset dp to 0 on mismatch instead of taking max), edit distance (Levenshtein — add costs for insert, delete, replace), shortest common supersequence.
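The length-only version of the transition above translates directly into code (a minimal sketch without the space optimization, so the table remains available for trace-back):

```python
def lcs_length(s1: str, s2: str) -> int:
    n, m = len(s1), len(s2)
    # dp[i][j] = LCS length of s1[:i] and s2[:j]; row/column 0 are base cases.
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i - 1] == s2[j - 1]:
                # Characters match: extend the diagonal subproblem.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Mismatch: skip a character from either string.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```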
Pattern 3: Linear DP (1D State)
Many DP problems have a single-dimensional state. Climbing stairs: dp[i] = number of ways to reach step i. Transition: dp[i] = dp[i-1] + dp[i-2] (take 1 or 2 steps). House robber: dp[i] = maximum money from robbing houses 0..i without robbing adjacent houses. Transition: dp[i] = max(dp[i-1], dp[i-2] + money[i]). Longest increasing subsequence (LIS): dp[i] = length of LIS ending at index i. Transition: dp[i] = max(dp[j] + 1) for all j < i where nums[j] < nums[i]. Time O(N^2), optimizable to O(N log N) with patience sorting (binary search on a tails array). Coin change: dp[amount] = minimum coins to make this amount, with base case dp[0] = 0. Transition: dp[amount] = min(dp[amount - coin] + 1) for each coin denomination. These problems share the pattern: define what dp[i] represents, find the transition from smaller states, and identify base cases.
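Two of these transitions, sketched with the usual space reductions (house robber in O(1) space, coin change with a 1D table; variable names are illustrative):

```python
def house_robber(money: list) -> int:
    # rob1 = best up to house i-2, rob2 = best up to house i-1.
    rob1, rob2 = 0, 0
    for amt in money:
        # Either skip this house (rob2) or rob it plus the best two back.
        rob1, rob2 = rob2, max(rob2, rob1 + amt)
    return rob2


def coin_change(coins: list, amount: int) -> int:
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = min coins to make amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1  # -1: unreachable
```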
Pattern 4: Interval DP
Interval DP operates on contiguous subarrays or substrings. State: dp[i][j] represents the answer for the interval [i, j]. Transition: try all possible split points k between i and j, combining dp[i][k] and dp[k+1][j]. Matrix chain multiplication: dp[i][j] = minimum cost to multiply matrices i through j. Transition: dp[i][j] = min(dp[i][k] + dp[k+1][j] + cost(i, k, j)) for all k in [i, j-1]. Burst balloons (LeetCode 312): dp[i][j] = maximum coins from bursting balloons in range [i, j]. Palindrome partitioning: dp[i][j] = minimum cuts to partition s[i..j] into palindromes. Iteration order for interval DP: iterate by interval length (len = 1, 2, …, N), then by start position (i = 0, 1, …, N-len). This ensures smaller intervals are computed before larger ones that depend on them. Time: O(N^3) for most interval DP problems.
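Matrix chain multiplication shows the characteristic iteration order, smaller intervals before larger ones (a minimal sketch; `dims` holds the n+1 matrix dimensions, so matrix i is dims[i] x dims[i+1]):

```python
def matrix_chain(dims: list) -> int:
    n = len(dims) - 1  # number of matrices
    # dp[i][j] = min scalar multiplications to compute the product of
    # matrices i..j; length-1 intervals cost 0 (the base case).
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):        # iterate by interval length
        for i in range(n - length + 1):   # then by start position
            j = i + length - 1
            # Try every split point k: multiply (i..k) and (k+1..j),
            # then combine the two results at cost dims[i]*dims[k+1]*dims[j+1].
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return dp[0][n - 1]
```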
The DP Problem-Solving Framework
Step-by-step framework for any DP problem in an interview: (1) Identify the state — what information do you need to make a decision? This becomes your DP array dimensions. (2) Define dp[state] — clearly state what value dp[i] (or dp[i][j]) represents. Write this down. (3) Find the transition — how do you compute dp[state] from smaller states? This is the recurrence relation. (4) Identify base cases — what are the smallest subproblems with known answers? (5) Determine iteration order — which states must be computed first? (6) Extract the answer — is it dp[N], max(dp[i]), or something else? Practice: for each pattern, solve 3-5 problems until the pattern is automatic. Knapsack variants: subset sum, partition equal subset, target sum, coin change II. LCS variants: edit distance, longest palindromic subsequence, shortest common supersequence. Linear DP: climbing stairs, house robber, longest increasing subsequence, word break. Interval DP: matrix chain multiplication, burst balloons, palindrome partitioning.
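As one worked example, the six framework steps map onto word break like this (a minimal sketch; the step numbers in the comments refer to the framework above):

```python
def word_break(s: str, word_dict: list) -> bool:
    # Steps 1-2: the state is a prefix length; dp[i] = True if s[:i]
    # can be segmented into dictionary words.
    words = set(word_dict)
    dp = [True] + [False] * len(s)    # Step 4: the empty prefix is segmentable
    for i in range(1, len(s) + 1):    # Step 5: shorter prefixes first
        # Step 3 (transition): s[:i] works if some dictionary word
        # ends at i and the prefix before that word also works.
        dp[i] = any(dp[j] and s[j:i] in words for j in range(i))
    return dp[len(s)]                 # Step 6: the answer is dp[N]
```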
{ "@context": "https://schema.org", "@type": "FAQPage", "mainEntity": [
{ "@type": "Question", "name": "How do you decide between memoization (top-down) and tabulation (bottom-up) for a DP problem?", "acceptedAnswer": { "@type": "Answer", "text": "Start with memoization in interviews — it is easier to write because you start with the recursive brute force and add caching. Memoization advantages: (1) Only computes subproblems that are actually needed (lazy evaluation). For sparse state spaces where many subproblems are never reached, this saves time. (2) The recursive structure mirrors the problem definition, making it easier to reason about correctness. (3) Base cases are natural (the recursion stops when it reaches them). Switch to tabulation when: (1) The interviewer asks for space optimization. Tabulation makes it easy to see that you only need the previous row (or previous two values), reducing space from O(N*M) to O(M) or O(1). (2) The recursion depth would cause a stack overflow. Python default recursion limit is 1000. A DP problem with N=10000 needs tabulation or sys.setrecursionlimit. (3) You need to fill the entire DP table anyway (no sparse subproblems). In practice, the time complexity is the same for both approaches. The choice is about code clarity and space optimization." } },
{ "@type": "Question", "name": "How do you identify the state and transition for a dynamic programming problem?", "acceptedAnswer": { "@type": "Answer", "text": "State identification: ask yourself — what information do I need to make a decision at each step? The answer becomes your DP dimensions. For the knapsack problem: I need to know which items I have considered (index i) and the remaining capacity (w). State: dp[i][w]. For longest increasing subsequence: I need to know the position in the array. State: dp[i] = LIS length ending at index i. For edit distance: I need to know how far I have processed in each string. State: dp[i][j] = edit distance between s1[0..i-1] and s2[0..j-1]. Transition identification: at each state, what choices can I make? For knapsack at state (i, w): I can skip item i (dp[i-1][w]) or include it (dp[i-1][w-weight[i]] + value[i]). Take the max. For LIS at state i: try extending from every previous index j where nums[j] < nums[i]. dp[i] = max(dp[j] + 1). Common patterns: include/exclude (knapsack), extend from previous (LIS), match/skip characters (LCS, edit distance), try all split points (interval DP). Once you identify the pattern, the transition formula follows." } },
{ "@type": "Question", "name": "What are the most common dynamic programming patterns in coding interviews?", "acceptedAnswer": { "@type": "Answer", "text": "Five patterns cover approximately 90% of interview DP problems: (1) 0/1 Knapsack — choose items to maximize value within a constraint. Variants: subset sum, partition equal subset, target sum, coin change (unbounded knapsack). State: dp[i][capacity]. (2) Longest Common Subsequence — compare two sequences. Variants: edit distance, longest palindromic subsequence, shortest common supersequence. State: dp[i][j] for positions in each sequence. (3) Linear DP — single sequence with decisions at each position. Problems: climbing stairs, house robber, longest increasing subsequence, word break, decode ways. State: dp[i] for position i. (4) Interval DP — optimal result for contiguous subarrays. Problems: matrix chain multiplication, burst balloons, palindrome partitioning, stone game. State: dp[i][j] for interval [i,j], try all split points k. (5) Grid DP — paths through a 2D grid. Problems: unique paths, minimum path sum, maximal square, dungeon game. State: dp[i][j] for cell (i,j). For each pattern, practice 3-5 problems until you can recognize and solve them within 20-30 minutes." } },
{ "@type": "Question", "name": "How do you optimize the space complexity of a dynamic programming solution?", "acceptedAnswer": { "@type": "Answer", "text": "Space optimization exploits the observation that many DP transitions only depend on the immediately previous state. Technique 1: Rolling array — if dp[i] only depends on dp[i-1], keep only two rows instead of the full table. For the knapsack problem, dp[i][w] depends only on dp[i-1][w] and dp[i-1][w-weight]. Use two 1D arrays (prev_row and curr_row) instead of a 2D array. Space reduces from O(N*W) to O(W). Technique 2: Single array with reverse iteration — for 0/1 knapsack, iterate w from W down to weight[i]. This ensures dp[w-weight[i]] still holds the previous row value (not yet overwritten in this iteration). Space: O(W) with a single array. Technique 3: Constant space — if dp[i] depends only on dp[i-1] and dp[i-2] (like Fibonacci or climbing stairs), use two variables instead of an array. Space: O(1). Technique 4: For LCS and edit distance, if you only need the length (not the actual sequence), use two rows. If you need the sequence reconstruction, you need the full table or a divide-and-conquer approach. Always mention the unoptimized space complexity first in interviews, then offer the optimization. The interviewer wants to see that you understand the tradeoff." } }
] }