Backtracking Algorithm Interview Patterns: Subsets, Permutations, N-Queens, Word Search

Backtracking is a systematic way to enumerate all possible solutions by building candidates incrementally and abandoning (pruning) partial candidates the moment they cannot lead to a valid solution. It is the go-to technique for constraint satisfaction problems, permutations, combinations, and puzzle solving.

    The Universal Backtracking Template

    def backtrack(state, choices):
        if is_solution(state):
            result.append(state.copy())
            return
        for choice in choices:
            if is_valid(state, choice):
                make_choice(state, choice)
                backtrack(state, remaining_choices(choices, choice))
                undo_choice(state, choice)   # backtrack
    

    The key insight: always undo (restore) state after recursing. This is what distinguishes backtracking from plain DFS.
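
As a minimal concrete instance of the template, here is the same skeleton enumerating all binary strings of a given length (the function name is ours, chosen for illustration):

```python
def binary_strings(n):
    """Enumerate all binary strings of length n with the template above."""
    result = []
    def backtrack(state):
        if len(state) == n:           # is_solution
            result.append(''.join(state))
            return
        for choice in '01':           # choices
            state.append(choice)      # make_choice
            backtrack(state)
            state.pop()               # undo_choice (the backtrack step)
    backtrack([])
    return result

# binary_strings(2) yields ['00', '01', '10', '11']
```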

    Pattern 1: Subsets

    def subsets(nums):
        result = []
        def bt(start, current):
            result.append(current[:])
            for i in range(start, len(nums)):
                current.append(nums[i])
                bt(i + 1, current)
                current.pop()
        bt(0, [])
        return result
    

    Time: O(2^N * N) — 2^N subsets, each takes O(N) to copy. The start index ensures each element is considered only once, preventing duplicates.
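
If the input itself may contain duplicate values (the Subsets II variant), a common approach is to sort first and skip repeated values at the same recursion level; a sketch:

```python
def subsets_with_dup(nums):
    """Subsets of a multiset, without duplicate subsets in the output."""
    nums = sorted(nums)               # sorting puts equal values side by side
    result = []
    def bt(start, current):
        result.append(current[:])
        for i in range(start, len(nums)):
            # skip a duplicate only at the same recursion level
            if i > start and nums[i] == nums[i - 1]:
                continue
            current.append(nums[i])
            bt(i + 1, current)
            current.pop()
    bt(0, [])
    return result

# subsets_with_dup([1, 2, 2]) yields [[], [1], [1, 2], [1, 2, 2], [2], [2, 2]]
```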

    Pattern 2: Permutations

    def permutations(nums):
        result = []
        def bt(current, remaining):
            if not remaining:
                result.append(current[:])
                return
            for i in range(len(remaining)):
                current.append(remaining[i])
                bt(current, remaining[:i] + remaining[i+1:])
                current.pop()
        bt([], nums)
        return result
    

    Time: O(N! * N). For duplicates: sort first, then skip when nums[i] == nums[i-1] and the previous copy is not currently in use. This is easiest with a visited boolean array over the sorted input, rather than the list-slicing version shown here.
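
That duplicate-handling idea, sketched with a sorted input and a used[] boolean array (names are illustrative):

```python
def permutations_with_dup(nums):
    """Distinct permutations of a list that may contain duplicates."""
    nums = sorted(nums)
    result = []
    used = [False] * len(nums)
    def bt(current):
        if len(current) == len(nums):
            result.append(current[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            # always take the earlier copy of a duplicate first
            if i > 0 and nums[i] == nums[i - 1] and not used[i - 1]:
                continue
            used[i] = True
            current.append(nums[i])
            bt(current)
            current.pop()
            used[i] = False
    bt([])
    return result

# permutations_with_dup([1, 1, 2]) yields [[1, 1, 2], [1, 2, 1], [2, 1, 1]]
```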

    Pattern 3: Combinations

    def combinations(n, k):
        result = []
        def bt(start, current):
            if len(current) == k:
                result.append(current[:])
                return
            # Pruning: not enough elements remaining
            remaining = n - start + 1
            needed = k - len(current)
            if remaining < needed: return
            for i in range(start, n + 1):
                current.append(i)
                bt(i + 1, current)
                current.pop()
        bt(1, [])
        return result
    

    The pruning condition "if remaining < needed: return" is critical — it cuts branches where there are not enough numbers left to fill the combination.
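
To see what that pruning buys, a small self-contained experiment that re-runs the same enumeration with a call counter (the function and parameter names are ours):

```python
def count_calls(n, k, prune):
    """Count recursive calls made while enumerating C(n, k) combinations."""
    calls = 0
    def bt(start, size):
        nonlocal calls
        calls += 1
        if size == k:
            return
        # the same feasibility check as above, toggled on or off
        if prune and (n - start + 1) < (k - size):
            return
        for i in range(start, n + 1):
            bt(i + 1, size + 1)
    bt(1, 0)
    return calls

# the pruned run visits strictly fewer nodes for n = 10, k = 8
assert count_calls(10, 8, prune=True) < count_calls(10, 8, prune=False)
```

The gap widens as k approaches n, because more branches die from having too few numbers left.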

    Pattern 4: N-Queens

    def solve_n_queens(n):
        result = []
        cols = set(); diag1 = set(); diag2 = set()
        board = [['.' for _ in range(n)] for _ in range(n)]
        def bt(row):
            if row == n:
                result.append([''.join(r) for r in board])
                return
            for col in range(n):
                if col in cols or (row-col) in diag1 or (row+col) in diag2:
                    continue
                cols.add(col); diag1.add(row-col); diag2.add(row+col)
                board[row][col] = 'Q'
                bt(row + 1)
                board[row][col] = '.'
                cols.remove(col); diag1.remove(row-col); diag2.remove(row+col)
        bt(0)
        return result
    

    The three sets track occupied columns and diagonals. row - col identifies the "\" diagonal (top-left to bottom-right); row + col identifies the "/" diagonal (top-right to bottom-left). This gives O(1) conflict checking.
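
A common sanity check is to reduce the same set-based search to a solution counter, which should report the well-known counts of 2 for n = 4 and 10 for n = 5:

```python
def count_queens(n):
    """Count N-Queens solutions using the same column/diagonal sets."""
    cols, diag1, diag2 = set(), set(), set()
    def bt(row):
        if row == n:
            return 1
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            total += bt(row + 1)
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
        return total
    return bt(0)

# count_queens(4) == 2, count_queens(5) == 10
```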

    Pattern 5: Word Search

    def exist(board, word):
        rows, cols = len(board), len(board[0])
        def bt(r, c, idx):
            if idx == len(word): return True
            if r < 0 or r >= rows or c < 0 or c >= cols: return False
            if board[r][c] != word[idx]: return False
            temp, board[r][c] = board[r][c], '#'  # mark visited
            found = (bt(r+1,c,idx+1) or bt(r-1,c,idx+1) or
                     bt(r,c+1,idx+1) or bt(r,c-1,idx+1))
            board[r][c] = temp  # restore
            return found
        for r in range(rows):
            for c in range(cols):
                if bt(r, c, 0): return True
        return False
    
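A runnable sanity check on the classic example grid; the helper repeats the search with the bounds test written out in full so the snippet stands on its own:

```python
def word_exists(board, word):
    """Word search with in-place visited marking and state restore."""
    rows, cols = len(board), len(board[0])
    def bt(r, c, idx):
        if idx == len(word):
            return True
        # explicit bounds check before touching the cell
        if r < 0 or r >= rows or c < 0 or c >= cols or board[r][c] != word[idx]:
            return False
        temp, board[r][c] = board[r][c], '#'   # mark visited
        found = (bt(r + 1, c, idx + 1) or bt(r - 1, c, idx + 1) or
                 bt(r, c + 1, idx + 1) or bt(r, c - 1, idx + 1))
        board[r][c] = temp                     # restore
        return found
    return any(bt(r, c, 0) for r in range(rows) for c in range(cols))

grid = [list("ABCE"), list("SFCS"), list("ADEE")]
assert word_exists(grid, "ABCCED") and word_exists(grid, "SEE")
assert not word_exists(grid, "ABCB")   # cells cannot be reused
```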

    When to Use Backtracking

    • Enumerate all solutions: subsets, permutations, combinations
    • Constraint satisfaction: Sudoku, N-Queens, crossword fill
    • Puzzle solving: maze traversal with revisit restriction, word search
    • Partition problems: partition to equal subset sum, palindrome partitioning
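
One of the partition problems above, palindrome partitioning, drops straight into the same template; a sketch:

```python
def palindrome_partitions(s):
    """All ways to split s into palindromic substrings."""
    result = []
    def bt(start, current):
        if start == len(s):
            result.append(current[:])
            return
        for end in range(start + 1, len(s) + 1):
            piece = s[start:end]
            if piece == piece[::-1]:       # only recurse on palindromic prefixes
                current.append(piece)
                bt(end, current)
                current.pop()
    bt(0, [])
    return result

# palindrome_partitions("aab") yields [['a', 'a', 'b'], ['aa', 'b']]
```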

    Pruning Strategies

    • Feasibility check: if current path cannot lead to solution (length exceeded, constraint violated), return early
    • Sort input: enables stopping early when value exceeds target (combination sum with duplicates)
    • Constraint propagation: N-Queens column/diagonal sets for O(1) validity checks
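
The sort-then-stop pruning mentioned above, sketched for combination sum: sorting lets the loop break as soon as one candidate exceeds the remaining target, since everything after it is at least as large.

```python
def combination_sum(candidates, target):
    """Combinations (with reuse) of candidates summing to target."""
    candidates = sorted(candidates)
    result = []
    def bt(start, remaining, current):
        if remaining == 0:
            result.append(current[:])
            return
        for i in range(start, len(candidates)):
            if candidates[i] > remaining:
                break                     # sorted input: the rest are too big
            current.append(candidates[i])
            bt(i, remaining - candidates[i], current)   # i, not i + 1: reuse allowed
            current.pop()
    bt(0, target, [])
    return result

# combination_sum([2, 3, 6, 7], 7) yields [[2, 2, 3], [7]]
```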

    Frequently Asked Questions

    What is the time complexity of backtracking algorithms and how do you communicate it in interviews?

    Backtracking worst-case complexity follows the branching factor and depth: O(branching_factor ^ depth). For subsets: 2^N choices (include or exclude each element) * O(N) to copy = O(2^N * N). For permutations: N! orderings * O(N) to copy = O(N! * N). For combinations of K from N: C(N, K) * O(K). For N-Queens: not all N^N placements are valid — with column/diagonal pruning the actual branching factor is much lower, roughly O(N!). In interviews: state the worst case, then explain your pruning and why it reduces average-case performance significantly. For combination sum with sorted input: "once the running sum exceeds the target, we stop exploring — this prunes a large fraction of the tree." The examiner cares that you recognize the exponential worst case AND that you apply pruning to make it practical. Never claim a backtracking algorithm is polynomial without a proof.

    How do you handle duplicates in backtracking problems like Subsets II and Permutations II?

    Duplicates in backtracking require sorting the input first, then skipping duplicate elements at the same recursion level. For Subsets II: after sorting, in the loop "for i in range(start, len(nums))", add: "if i > start and nums[i] == nums[i-1]: continue". The condition "i > start" (not "i > 0") is critical — it only skips duplicates at the same level, not across levels. Example: [1,2,2]. At level 0: choose 1 (fine), then choose the first 2 (fine), then skip the second 2 (i > start and nums[2] == nums[1]). For Permutations II: sort first and use a visited[] boolean array. In the loop: skip if visited[i] is True, OR if nums[i] == nums[i-1] and NOT visited[i-1]. The second condition prevents generating [2a, 2b, 1] separately from [2b, 2a, 1] — by requiring that we always use the earlier duplicate first, we break the symmetry.

    What is the difference between backtracking and dynamic programming?

    Backtracking and DP are complementary, not competing. Backtracking generates all solutions by exploring a search tree and pruning invalid branches early. Good for: enumeration problems (list all solutions), when solutions are structurally non-decomposable, or when pruning makes the search space manageable. Time: typically exponential worst case. Dynamic programming stores the results of subproblems to avoid recomputation. Good for: optimization problems (find the best solution), when subproblems overlap (the same subproblem is solved multiple times in the recursion tree). Time: polynomial if the subproblem count is polynomial. Key question: do you need all solutions or just the optimal one? If all solutions, backtracking. If the optimal, consider DP. Can they overlap? Yes: some problems use backtracking with memoization — if you recognize that the same state appears in multiple branches of the search tree, memoize the result (Sudoku solver, Word Break II).
