Divide and Conquer Interview Patterns: Merge Sort, Quick Select, Master Theorem

The Divide and Conquer Paradigm

Divide and conquer solves problems by: (1) Divide — split the problem into smaller subproblems of the same type. (2) Conquer — solve each subproblem recursively (base case: a problem small enough to solve directly). (3) Combine — merge the subproblem solutions into the overall solution. The approach typically shaves a factor off the naive complexity: O(n²) sorting and closest pair of points become O(n log n), O(n) exponentiation becomes O(log n), and O(n³) matrix multiplication becomes O(n^2.81) with Strassen's algorithm.

Merge Sort

Merge sort splits the array in half, recursively sorts each half, then merges the two sorted halves. The merge step is O(n) — two pointers scan both sorted halves and place elements in order. Total time: T(n) = 2T(n/2) + O(n) = O(n log n) by the Master Theorem. Space: O(n) for the merge buffer (unlike quicksort, which is O(log n) space for the call stack).

Key applications: (1) Count inversions (LC 315, 493): an inversion is a pair (i,j) where i<j but nums[i]>nums[j]. During merge, when an element from the right half is placed before elements from the left half, that placement contributes as many inversions as there are elements remaining in the left half. Modify merge to count during the merge step. (2) Sort linked list in O(n log n) — merge sort is preferred over quicksort for linked lists since random access is O(n) but merge is O(n) using the two-pointer technique.

def mergeSort(nums):
    # Divide in half, sort each half recursively, then combine.
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = mergeSort(nums[:mid])
    right = mergeSort(nums[mid:])
    return merge(left, right)

def merge(left, right):
    # Two-pointer merge of two sorted lists: O(len(left) + len(right)).
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result + left[i:] + right[j:]
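
The inversion-counting modification described above can be sketched as follows (a sketch; the function name `count_inversions` is illustrative):

```python
def count_inversions(nums):
    """Return the number of pairs (i, j) with i < j and nums[i] > nums[j]."""
    def sort_count(arr):
        if len(arr) <= 1:
            return arr, 0
        mid = len(arr) // 2
        left, inv_left = sort_count(arr[:mid])
        right, inv_right = sort_count(arr[mid:])
        merged, i, j = [], 0, 0
        inv = inv_left + inv_right
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                # right[j] jumps ahead of every remaining left element:
                # each of those len(left) - i pairs is an inversion.
                merged.append(right[j]); j += 1
                inv += len(left) - i
        return merged + left[i:] + right[j:], inv
    return sort_count(nums)[1]
```

The only change from plain merge sort is the `inv += len(left) - i` line, so the O(n log n) bound is unchanged.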

Quick Select (Kth Largest/Smallest)

Quick select finds the kth smallest element in O(n) average time using the partition step from quicksort. Partition around a pivot: elements smaller than the pivot go left, larger go right, so the pivot lands in its final sorted position. If the pivot lands at index k-1, it is the answer. If the pivot index is less than k-1, recurse on the right half; if greater, recurse on the left half. Unlike quicksort, only one side is recursed — average O(n) time (O(n²) worst case with bad pivot selection, mitigated by a random pivot).

import random

def findKthSmallest(nums, k):
    def partition(left, right):
        # Lomuto partition with a random pivot; after this call,
        # nums[store] is in its final sorted position.
        pivot_idx = random.randint(left, right)
        nums[pivot_idx], nums[right] = nums[right], nums[pivot_idx]
        pivot = nums[right]
        store = left
        for i in range(left, right):
            if nums[i] <= pivot:
                nums[store], nums[i] = nums[i], nums[store]
                store += 1
        nums[store], nums[right] = nums[right], nums[store]
        return store

    left, right = 0, len(nums) - 1
    while left <= right:
        pivot_pos = partition(left, right)
        if pivot_pos == k - 1:
            return nums[pivot_pos]
        elif pivot_pos < k - 1:
            left = pivot_pos + 1
        else:
            right = pivot_pos - 1
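
LC 215 asks for the kth largest rather than the kth smallest: the kth largest of n elements sits at sorted index n-k. A self-contained sketch adapting the same partition loop (the name `find_kth_largest` is illustrative):

```python
import random

def find_kth_largest(nums, k):
    """Kth largest via quick select; iterates on one side only."""
    target = len(nums) - k  # kth largest == element at sorted index n-k
    left, right = 0, len(nums) - 1
    while True:
        # Lomuto partition with a random pivot.
        p = random.randint(left, right)
        nums[p], nums[right] = nums[right], nums[p]
        pivot, store = nums[right], left
        for i in range(left, right):
            if nums[i] <= pivot:
                nums[store], nums[i] = nums[i], nums[store]
                store += 1
        nums[store], nums[right] = nums[right], nums[store]
        if store == target:
            return nums[store]
        elif store < target:
            left = store + 1   # answer lies right of the pivot
        else:
            right = store - 1  # answer lies left of the pivot
```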

Master Theorem

The Master Theorem solves recurrences of the form T(n) = a·T(n/b) + f(n): a ≥ 1 subproblems of size n/b, each solved recursively, plus combine cost f(n).

Let c = log_b(a) (critical exponent). Compare f(n) to n^c:

  • Case 1: f(n) = O(n^(c-ε)) for some ε > 0 — combining is cheap, subproblems dominate. T(n) = Θ(n^c). Example: Strassen matrix multiplication T(n) = 7T(n/2) + O(n²) → a=7, b=2, c=log_2(7)≈2.81, f(n)=O(n²)=O(n^(c-ε)) → T(n) = Θ(n^2.81).
  • Case 2: f(n) = Θ(n^c) — combining and subproblems contribute equally. T(n) = Θ(n^c × log n). Examples: merge sort T(n) = 2T(n/2) + O(n) → a=2, b=2, c=1, f(n)=Θ(n^1) → T(n) = Θ(n log n); binary search T(n) = T(n/2) + O(1) → a=1, b=2, c=0, f(n)=Θ(n^0) → T(n) = Θ(log n).
  • Case 3: f(n) = Ω(n^(c+ε)) — combining dominates. T(n) = Θ(f(n)). Example: if T(n) = T(n/2) + O(n) → a=1, b=2, c=0, f(n)=O(n)=Ω(n^(0+1)) → T(n) = Θ(n).

Binary Search (Divide and Conquer)

Binary search is divide and conquer on a sorted array: check the midpoint, recurse on the half that could contain the answer. T(n) = T(n/2) + O(1) = O(log n). The recursive formulation is pedagogically useful; the iterative implementation avoids stack overhead.
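
The iterative form is short enough to sketch directly:

```python
def binary_search(nums, target):
    """Iterative binary search on a sorted list; returns an index or -1."""
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2  # Python ints don't overflow
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1   # target can only be in the right half
        else:
            right = mid - 1  # target can only be in the left half
    return -1
```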

Closest Pair of Points

Finding the closest pair among n points: brute force is O(n²). Divide and conquer: sort by x-coordinate. Divide at midpoint. Recursively find closest pair in each half (distances d_left, d_right). Set d = min(d_left, d_right). The tricky part: check pairs straddling the divide — consider only points within distance d of the dividing line (the “strip”). Points in the strip are sorted by y-coordinate; for each point, only the next 7 points in y-order need to be checked (geometric proof). Total time O(n log n).
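
The idea can be sketched as follows. This is the simpler O(n log² n) variant that re-sorts the strip by y at each level; the full O(n log n) version maintains a y-sorted list alongside the recursion. The function name `closest_pair` is illustrative:

```python
import math

def closest_pair(points):
    """Return the minimum distance between any two points in the list."""
    pts = sorted(points)  # sort once by x-coordinate

    def solve(p):
        n = len(p)
        if n <= 3:  # base case: brute force the few remaining pairs
            return min((math.dist(a, b)
                        for i, a in enumerate(p) for b in p[i + 1:]),
                       default=math.inf)
        mid = n // 2
        mid_x = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))
        # Strip: points within d of the dividing line, sorted by y.
        strip = sorted((q for q in p if abs(q[0] - mid_x) < d),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:  # at most 7 y-neighbors to check
                d = min(d, math.dist(a, b))
        return d

    return solve(pts)
```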

Fast Exponentiation (Repeated Squaring)

Compute x^n in O(log n) by: x^n = (x^(n/2))^2 if n is even; x × x^(n-1) if n is odd. This reduces the problem size by half each step. Used for modular exponentiation in cryptography (RSA key generation) and computing Fibonacci in O(log n) via matrix exponentiation.
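
The iterative bit-by-bit form of repeated squaring, with an optional modulus for the cryptographic use case (a sketch; the name `fast_pow` is illustrative):

```python
def fast_pow(x, n, mod=None):
    """Compute x**n (optionally mod m) in O(log n) multiplications."""
    result, base = 1, x
    while n > 0:
        if n & 1:              # current exponent bit is set
            result = result * base
            if mod:
                result %= mod
        base = base * base     # square the base at each step
        if mod:
            base %= mod
        n >>= 1                # move to the next bit of the exponent
    return result
```

Python's built-in `pow(x, n, mod)` does the same thing; writing it out is a common interview exercise.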

Interview Problems Using Divide and Conquer

  • LC 23 — Merge K Sorted Lists: divide lists into pairs, merge pairs recursively. O(n log k).
  • LC 148 — Sort List: merge sort on linked list. O(n log n) time, O(log n) stack space.
  • LC 215 — Kth Largest Element: quick select. O(n) average.
  • LC 53 — Maximum Subarray: divide and conquer finds max subarray crossing the midpoint; O(n log n) — though Kadane’s algorithm is O(n) and preferred.
  • LC 169 — Majority Element: Boyer-Moore voting algorithm (O(n)) or divide and conquer (find majority in each half, then verify).
  • LC 315 — Count of Smaller Numbers After Self: merge sort with inversion counting.
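
The pairwise-merge strategy behind LC 23 can be sketched on Python lists (the actual problem uses linked lists; this keeps the divide-and-conquer shape, and the names are illustrative):

```python
def merge_two(a, b):
    """Standard two-pointer merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def merge_k(lists):
    """Merge k sorted lists by repeatedly merging adjacent pairs.

    Each round halves the number of lists, so there are O(log k) rounds
    of O(n) total merge work: O(n log k) overall.
    """
    if not lists:
        return []
    while len(lists) > 1:
        lists = [merge_two(lists[i], lists[i + 1]) if i + 1 < len(lists)
                 else lists[i]
                 for i in range(0, len(lists), 2)]
    return lists[0]
```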

{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "How does quick select find the Kth largest element in O(n) average time?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Quick select uses the partition step from quicksort: choose a pivot, rearrange the array so all elements less than the pivot are to its left and all greater are to its right. The pivot ends up at its final sorted position. If the pivot's position equals k-1 (0-indexed), it is the kth smallest element; return it. If the pivot position is less than k-1, recurse on the right half only; if greater, recurse on the left half only. Unlike quicksort, which recurses on both halves, quick select recurses on only one half, so average work is O(n) + O(n/2) + O(n/4) + ... = O(2n) = O(n). Worst case is O(n²) with a consistently bad pivot (e.g., a sorted array with an always-first-element pivot). Mitigate with random pivot selection (randomized quick select) to make the worst case exponentially unlikely. For guaranteed O(n) worst case, median-of-medians pivot selection achieves O(n) deterministically, but with a large constant factor, making randomized quick select preferred in practice."
}
},
{
"@type": "Question",
"name": "What is the Master Theorem and how do you apply it to algorithm analysis?",
"acceptedAnswer": {
"@type": "Answer",
"text": "The Master Theorem gives the time complexity of divide-and-conquer recurrences of the form T(n) = a·T(n/b) + f(n), where a ≥ 1 subproblems of size n/b are solved recursively with f(n) work to combine. Compute the critical exponent c = log_b(a). Case 1: f(n) grows slower than n^c (f(n) = O(n^(c-ε))): T(n) = Θ(n^c); the recursive subproblems dominate. Case 2: f(n) grows at the same rate as n^c (f(n) = Θ(n^c log^k n)): T(n) = Θ(n^c log^(k+1) n); equal contribution. Case 3: f(n) grows faster than n^c (f(n) = Ω(n^(c+ε))): T(n) = Θ(f(n)); the combine step dominates. Interview examples: Merge sort: T(n) = 2T(n/2) + O(n). a=2, b=2, c=log_2(2)=1, f(n)=O(n)=O(n^1) → Case 2 → T(n) = Θ(n log n). Binary search: T(n) = T(n/2) + O(1). a=1, b=2, c=log_2(1)=0, f(n)=O(1)=O(n^0) → Case 2 → T(n) = Θ(log n). Strassen matrix mult: T(n) = 7T(n/2) + O(n^2). c=log_2(7)≈2.81, f(n)=O(n^2) < O(n^2.81) → Case 1 → T(n) = Θ(n^2.81)."
}
},
{
"@type": "Question",
"name": "How does merge sort on a linked list differ from merge sort on an array?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Merge sort is actually preferred over quicksort for linked lists. Finding the midpoint uses slow-fast pointers (tortoise and hare): slow moves 1 node per step, fast moves 2; when fast reaches the end, slow is at the midpoint. This scan is O(n), compensating for the O(1) indexed access that arrays have but linked lists lack. The merge step for linked lists requires no extra space: instead of allocating a result array, rewire the next pointers of nodes: use a dummy head, advance through both lists comparing values, and append the smaller node by relinking next pointers. This makes linked-list merge sort O(n log n) time and O(log n) space (only the recursion stack; no merge buffer), versus array merge sort, which requires O(n) extra space for the merge buffer. Quicksort on linked lists is problematic: the partition step requires O(n) time to reach the pivot's position, and swap operations require unlinking and relinking nodes, making it messier than merge sort. Bottom-up merge sort on linked lists achieves O(1) extra space by iterating over merge sizes (1, 2, 4, 8, ...) without recursion."
}
}
]
}
