What Is Dynamic Programming?
Dynamic programming (DP) solves problems by breaking them into overlapping subproblems, solving each subproblem once, and storing the result (memoization or tabulation). DP applies when a problem has two properties: optimal substructure (the optimal solution contains optimal solutions to subproblems) and overlapping subproblems (the same subproblems are solved multiple times in a naive recursion). DP problems appear in roughly 15-20% of FAANG coding interviews.
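To see the two properties concretely, compare a naive Fibonacci recursion with a memoized one (a minimal sketch; the function names are illustrative, not from any library):

```python
from functools import lru_cache

# Naive recursion: fib(k) is recomputed many times -> exponential calls.
def fib_naive(n: int) -> int:
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoization: each subproblem is computed once and cached -> O(n) calls.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

The optimal substructure is the recurrence itself; the overlapping subproblems are the repeated `fib(k)` calls the cache eliminates.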
How to Recognize a DP Problem
- The problem asks for a count, minimum, maximum, or yes/no across all possibilities
- Making a choice at each step affects future choices (not greedy)
- Keywords: “number of ways”, “minimum cost”, “longest”, “can you reach”, “how many paths”
- The naive recursive solution has exponential time but the same subproblems repeat
Pattern 1: 1D DP (Linear)
Climbing Stairs / Fibonacci
# How many ways to climb n stairs (1 or 2 steps at a time)?
def climb_stairs(n: int) -> int:
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
# Space optimized: only need last 2 values
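The space optimization mentioned above could look like this (a sketch; the name `climb_stairs_o1` is mine):

```python
def climb_stairs_o1(n: int) -> int:
    if n <= 2:
        return n
    prev2, prev1 = 1, 2  # ways to reach steps 1 and 2
    for _ in range(3, n + 1):
        # Each step only needs the previous two values: O(1) space.
        prev2, prev1 = prev1, prev2 + prev1
    return prev1
```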
House Robber
# Max sum with no two adjacent elements
def rob(nums: list) -> int:
    prev2, prev1 = 0, 0
    for num in nums:
        curr = max(prev1, prev2 + num)
        prev2, prev1 = prev1, curr
    return prev1
Pattern 2: 0/1 Knapsack
Given items with weights and values, maximize value within a weight limit. Each item is taken or not taken (0/1). The state is dp[i][w] = max value using first i items with capacity w.
def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            # Skip item i-1
            dp[i][w] = dp[i-1][w]
            # Take item i-1 (if it fits)
            if weights[i-1] <= w:
                dp[i][w] = max(dp[i][w],
                               dp[i-1][w - weights[i-1]] + values[i-1])
    return dp[n][capacity]
# Space optimization: use 1D array, iterate w backwards
def knapsack_1d(weights, values, capacity):
    dp = [0] * (capacity + 1)
    for i in range(len(weights)):
        for w in range(capacity, weights[i] - 1, -1):  # reverse to avoid reuse
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]
Pattern 3: Longest Common Subsequence (LCS) Family
# LCS: longest common subsequence of two strings
def lcs(s1: str, s2: str) -> int:
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i-1] == s2[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    return dp[m][n]
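Interviewers sometimes ask for the subsequence itself, not just its length. The table above supports this: walk back from `dp[m][n]`, collecting characters wherever the two strings match (a sketch; `lcs_string` is my name for it):

```python
def lcs_string(s1: str, s2: str) -> str:
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i-1] == s2[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
    # Walk back from dp[m][n], collecting matched characters.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if s1[i-1] == s2[j-1]:
            out.append(s1[i-1])
            i -= 1
            j -= 1
        elif dp[i-1][j] >= dp[i][j-1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```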
# Longest Increasing Subsequence (LIS): O(n^2)
def lis(nums: list) -> int:
    if not nums:
        return 0
    n = len(nums)
    dp = [1] * n  # each element alone is a subsequence of length 1
    for i in range(1, n):
        for j in range(i):
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)
# LIS in O(n log n): binary search over the "tails" array
import bisect

def lis_fast(nums: list) -> int:
    tails = []  # tails[k] = smallest tail of an increasing subsequence of length k+1
    for num in nums:
        pos = bisect.bisect_left(tails, num)
        if pos == len(tails):
            tails.append(num)
        else:
            tails[pos] = num
    return len(tails)
Pattern 4: 2D DP (Grid)
# Unique paths in m x n grid (only right or down)
def unique_paths(m: int, n: int) -> int:
    dp = [[1] * n for _ in range(m)]
    for i in range(1, m):
        for j in range(1, n):
            dp[i][j] = dp[i-1][j] + dp[i][j-1]
    return dp[m-1][n-1]
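This particular problem also has a combinatorial shortcut worth mentioning in an interview: every path is a sequence of exactly m-1 "down" moves and n-1 "right" moves, so the count is C(m+n-2, m-1). A sketch (the function name is mine):

```python
from math import comb

def unique_paths_closed_form(m: int, n: int) -> int:
    # Choose which of the (m-1) + (n-1) moves are "down".
    return comb(m + n - 2, m - 1)
```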
# Minimum path sum in a grid (move only right or down)
def min_path_sum(grid):
    m, n = len(grid), len(grid[0])
    dp = [[float("inf")] * n for _ in range(m)]
    dp[0][0] = grid[0][0]
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            top = dp[i-1][j] if i > 0 else float("inf")
            left = dp[i][j-1] if j > 0 else float("inf")
            dp[i][j] = grid[i][j] + min(top, left)
    return dp[m-1][n-1]
Pattern 5: Interval DP
# Burst Balloons: dp[i][j] = max coins from bursting all balloons between i and j
def max_coins(nums: list) -> int:
    nums = [1] + nums + [1]  # add boundary sentinels
    n = len(nums)
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n):
        for left in range(n - length):
            right = left + length
            for k in range(left + 1, right):  # k = last balloon burst in (left, right)
                dp[left][right] = max(
                    dp[left][right],
                    nums[left] * nums[k] * nums[right] + dp[left][k] + dp[k][right]
                )
    return dp[0][n-1]
Top DP Interview Problems by Pattern
- 1D DP: Climbing Stairs, House Robber, Coin Change, Jump Game, Word Break
- Knapsack: Partition Equal Subset Sum, Target Sum, Ones and Zeroes
- LCS family: Edit Distance, Distinct Subsequences, Interleaving String
- Grid DP: Unique Paths, Minimum Path Sum, Dungeon Game, Cherry Pickup
- Interval DP: Burst Balloons, Matrix Chain Multiplication, Palindrome Partitioning
- Tree DP: House Robber III, Binary Tree Cameras, Maximum Path Sum
Top-Down vs Bottom-Up
Top-down (memoization): write the recursive solution, add a cache (dict or lru_cache). Natural to write, easy to reason about, only computes needed subproblems. Bottom-up (tabulation): fill a DP table iteratively from base cases. Usually faster in practice (no recursion overhead), enables space optimization (rolling array). In interviews: start with top-down recursion + memoization to get a correct solution quickly, then optimize to bottom-up if asked about space complexity.
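As a concrete illustration of the top-down style, here is Coin Change written as plain recursion plus `@lru_cache` (a sketch; the helper name `best` is mine):

```python
from functools import lru_cache

def coin_change(coins: list, amount: int) -> int:
    """Fewest coins summing to `amount`; -1 if impossible."""
    @lru_cache(maxsize=None)
    def best(rem: int):
        if rem == 0:
            return 0
        if rem < 0:
            return float("inf")
        # Try each coin as the last one used; the cache makes each `rem` O(len(coins)).
        return min((best(rem - c) + 1 for c in coins), default=float("inf"))

    ans = best(amount)
    return -1 if ans == float("inf") else ans
```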
Frequently Asked Questions
What are the most common dynamic programming patterns in coding interviews?
Five patterns cover roughly 80% of DP interview problems: (1) 1D linear DP: each state depends on a few previous states. Problems: Fibonacci, Climbing Stairs, House Robber, Coin Change. (2) 0/1 Knapsack: binary choice per item, optimize over capacity. Problems: Partition Equal Subset Sum, Target Sum, Boolean DP. (3) LCS family: 2D DP on two sequences. Problems: Longest Common Subsequence, Edit Distance, Interleaving String. (4) Grid DP: 2D table filled left-to-right, top-to-bottom. Problems: Unique Paths, Minimum Path Sum, Dungeon Game. (5) Interval DP: state is a range [i,j]; solve smaller ranges before larger. Problems: Burst Balloons, Matrix Chain Multiplication, Palindrome Partitioning. Recognizing which pattern applies is the key skill.
How do you approach a new dynamic programming problem?
Use this four-step framework: (1) Identify that DP applies: does the problem ask for count/min/max across all possibilities? Do choices affect future choices? Would brute force have exponential time with repeated subproblems? (2) Define the state: what information do you need to represent each subproblem? dp[i] = answer for first i elements, dp[i][j] = answer for substring i..j, etc. The state definition is the hardest part. (3) Write the recurrence: how does dp[i] relate to smaller subproblems? (4) Identify base cases and fill order. Start with top-down (recursion + memoization) to get a correct solution quickly. Then optimize to bottom-up (iterative table) if space is a concern — often you can reduce space from O(n^2) to O(n) by only keeping the last row.
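The last-row space optimization mentioned above can be sketched with LCS: row i of the table depends only on row i-1, so two rows of length n+1 suffice instead of the full (m+1) x (n+1) table (the name `lcs_two_rows` is mine):

```python
def lcs_two_rows(s1: str, s2: str) -> int:
    # Rolling-array LCS: O(min rows needed) = 2, so O(n) space.
    prev = [0] * (len(s2) + 1)
    for ch in s1:
        curr = [0] * (len(s2) + 1)
        for j in range(1, len(s2) + 1):
            if ch == s2[j-1]:
                curr[j] = prev[j-1] + 1
            else:
                curr[j] = max(prev[j], curr[j-1])
        prev = curr
    return prev[len(s2)]
```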
What is the difference between memoization and tabulation in DP?
Memoization (top-down): write the natural recursive solution and add a cache. When the function is called with the same arguments twice, return the cached result instead of recomputing. Use a dictionary or @lru_cache. Only computes subproblems that are actually needed — can be faster when many subproblems are unreachable. Natural to write and reason about. Tabulation (bottom-up): fill a DP table iteratively starting from base cases. Every cell is computed in dependency order. No recursion overhead (no call stack risk). Allows space optimization — once cells are no longer needed, their memory can be reused (rolling array). In interviews, memoization is faster to write and gets a correct solution quickly. Tabulation shows deeper understanding and is preferred for production code. Most interviewers accept either.
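For contrast with the memoized style, here is the same Coin Change problem written as bottom-up tabulation: every cell `dp[a]` is filled in dependency order, with no recursion (a sketch; `coin_change_tab` is my name for it):

```python
def coin_change_tab(coins: list, amount: int) -> int:
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return -1 if dp[amount] == INF else dp[amount]
```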