Why Concurrency in Interviews?
Concurrency problems appear in senior-level interviews (SDE-2 and above) at companies like Apple, Google, and Databricks. They test your understanding of thread safety, race conditions, deadlocks, and synchronization primitives. Common problems: design a thread-safe data structure, implement a bounded blocking queue, solve the producer-consumer problem, and implement a concurrent LRU cache.
Race Conditions and Mutual Exclusion
A race condition occurs when multiple threads access shared state concurrently and the result depends on the timing of their execution. Classic example: counter += 1 is not atomic — it involves three operations (read, increment, write). Two threads executing simultaneously may both read the same value and both write an incremented value, losing one increment. Solution: mutual exclusion (mutex/lock). Only one thread can hold the lock at a time — other threads block until it is released.
import threading

class ThreadSafeCounter:
    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._count += 1

    def get(self):
        with self._lock:
            return self._count
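As a quick sanity check, the counter can be exercised from multiple threads (the class is repeated here so the snippet runs standalone; the worker function and thread counts are illustrative):

```python
import threading

class ThreadSafeCounter:
    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._count += 1

    def get(self):
        with self._lock:
            return self._count

def worker(counter, n):
    # each worker performs n increments
    for _ in range(n):
        counter.increment()

counter = ThreadSafeCounter()
threads = [threading.Thread(target=worker, args=(counter, 100_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.get())  # 400000: no increments lost
```

Without the lock, the same run can print less than 400000 because concurrent read-increment-write sequences interleave.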
Producer-Consumer Problem
Producers add items to a shared buffer; consumers remove and process them. The buffer has a fixed capacity. Producers must wait if the buffer is full; consumers must wait if it is empty. Solution: bounded blocking queue using a mutex and two condition variables (not_full and not_empty).
import threading
from collections import deque

class BoundedBlockingQueue:
    def __init__(self, capacity):
        self._queue = deque()
        self._capacity = capacity
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)

    def enqueue(self, item):
        with self._not_full:
            while len(self._queue) >= self._capacity:
                self._not_full.wait()
            self._queue.append(item)
            self._not_empty.notify()

    def dequeue(self):
        with self._not_empty:
            while not self._queue:
                self._not_empty.wait()
            item = self._queue.popleft()
            self._not_full.notify()
            return item
Deadlock: Four Necessary Conditions
A deadlock occurs when two or more threads each wait for a resource held by another thread in the group, so none can proceed. Four necessary conditions (all must hold for deadlock): (1) Mutual exclusion: at least one resource is held exclusively. (2) Hold and wait: a thread holds at least one resource while waiting for another. (3) No preemption: resources cannot be forcibly taken. (4) Circular wait: a cycle exists in the wait graph. Deadlock is prevented by breaking any one of these conditions:
- Break circular wait: establish a global ordering of resources; always acquire locks in the same order. Thread 1 and Thread 2 both must acquire lock_A then lock_B — no cycle possible.
- Break hold and wait: acquire all needed locks at once (atomic acquisition) or release all locks before trying again.
- Break no preemption with timeout and retry: tryLock(timeout); if the lock is not acquired within the timeout, release all held locks and retry.
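The lock-ordering and timeout strategies can be sketched in Python (lock_a, lock_b, and the timeout value are illustrative, not part of any standard API; Lock.acquire does accept a timeout argument):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def ordered_acquire():
    # Break circular wait: every thread takes lock_a before lock_b,
    # so no cycle can appear in the wait graph.
    with lock_a:
        with lock_b:
            pass  # critical section touching both resources

def timed_acquire(timeout=0.05):
    # Break no preemption: timed acquisition; on failure, release
    # everything and retry (a real version would add random backoff).
    while True:
        with lock_a:
            if lock_b.acquire(timeout=timeout):
                try:
                    return "done"  # both locks held here
                finally:
                    lock_b.release()
        # lock_a has been released; loop and try again
```

The timed variant trades deadlock for possible livelock, which is why a randomized backoff between retries matters in practice.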
Read-Write Locks
When reads are much more frequent than writes, a standard mutex is overly restrictive — multiple concurrent readers are safe if there are no writers. A read-write lock allows: multiple concurrent readers, OR a single exclusive writer (no readers). Python’s threading module doesn’t include RWLock natively — use the rwlock library or implement via threading.Condition. Java has ReentrantReadWriteLock. Use case: cache with frequent reads and rare updates, configuration service, DNS cache.
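A minimal reader-preference sketch built on threading.Condition (the class name is illustrative; note that writers can starve under a steady stream of readers, which fairness-aware libraries like rwlock address):

```python
import threading

class RWLock:
    """Reader-preference read-write lock built on a single Condition."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:
            self._readers += 1  # readers never block each other

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # let a waiting writer proceed

    def acquire_write(self):
        self._cond.acquire()        # writer keeps the underlying lock...
        while self._readers > 0:
            self._cond.wait()       # ...waiting until no readers remain

    def release_write(self):
        self._cond.release()
```

The writer holds the condition's lock for the entire write, which excludes both other writers and new readers; wait() releases that lock while blocked, so readers already in flight can drain.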
Semaphore
A semaphore maintains a count. acquire() decrements the count (blocking while it is 0); release() increments it. A binary semaphore (count 0 or 1) behaves like a mutex, except it can be released by a different thread (a mutex must be released by its holder). A counting semaphore limits concurrent access to N threads at once: thread-level rate limiting, connection pool management (max N concurrent DB connections).
import threading

pool = threading.Semaphore(5)  # max 5 concurrent DB connections

def query_database():
    with pool:  # acquires the semaphore (blocks if 5 are already held)
        # ... execute DB query ...
        pass    # semaphore released on exit
Concurrent LRU Cache
A thread-safe LRU cache requires: a hash map (key → node) for O(1) lookup, a doubly linked list (maintains LRU order), and a lock to protect both. Coarse-grained locking: a single lock protects all operations; simple but limits concurrency. Fine-grained locking: separate locks for the hash map and linked list (complex, prone to bugs). Segmented locking: divide the cache into N segments, each with its own lock, reducing contention roughly by a factor of N. Java 7's ConcurrentHashMap used this approach with 16 segments by default; Java 8 replaced the segments with per-bucket locking plus CAS.
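A coarse-grained sketch: in Python, OrderedDict can stand in for the hash map plus doubly linked list, guarded by a single lock (this shows the simple variant, not the segmented one; the class name is illustrative):

```python
import threading
from collections import OrderedDict

class ConcurrentLRUCache:
    # Coarse-grained: one lock guards the whole structure. OrderedDict
    # plays the role of hash map + doubly linked list combined.
    def __init__(self, capacity):
        self._capacity = capacity
        self._data = OrderedDict()
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            if key not in self._data:
                return None
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]

    def put(self, key, value):
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)
            self._data[key] = value
            if len(self._data) > self._capacity:
                self._data.popitem(last=False)  # evict least recently used
```

A segmented version would hold N such caches and route each key to `hash(key) % N`, so threads touching different segments never contend.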
Interview Classic: Print in Order (LC 1114)
Three threads must print “first”, “second”, “third” in order. Use two semaphores: sem1 (initial=0) gates “second”; sem2 (initial=0) gates “third”. Thread 1: print “first”, release sem1. Thread 2: acquire sem1, print “second”, release sem2. Thread 3: acquire sem2, print “third”. Semaphores with initial=0 act as event signals — the acquire blocks until another thread releases.
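A sketch of the two-semaphore solution (the Foo class shape follows the LeetCode template; the driver threads and the out list are illustrative):

```python
import threading

class Foo:
    def __init__(self):
        # Semaphores with initial count 0 block acquire() until released.
        self._sem1 = threading.Semaphore(0)  # released after "first"
        self._sem2 = threading.Semaphore(0)  # released after "second"

    def first(self, print_first):
        print_first()
        self._sem1.release()  # let second() proceed

    def second(self, print_second):
        self._sem1.acquire()  # blocks until first() has run
        print_second()
        self._sem2.release()  # let third() proceed

    def third(self, print_third):
        self._sem2.acquire()  # blocks until second() has run
        print_third()

# Start the threads in scrambled order; output still comes out in order.
out = []
foo = Foo()
threads = [
    threading.Thread(target=foo.third, args=(lambda: out.append("third"),)),
    threading.Thread(target=foo.first, args=(lambda: out.append("first"),)),
    threading.Thread(target=foo.second, args=(lambda: out.append("second"),)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out)  # ['first', 'second', 'third']
```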
Java-Specific: volatile, synchronized, Atomic
volatile: ensures visibility — reads/writes to a volatile variable are always from/to main memory, not thread-local caches. Does NOT provide atomicity (volatile int i; i++ is still not atomic). Use for boolean flags (stop = true to signal a thread to stop).
synchronized: intrinsic lock on an object. synchronized method locks on this. synchronized(obj) block locks on obj. Reentrant — a thread can re-acquire a lock it already holds.
Atomic classes (AtomicInteger, AtomicLong, AtomicReference): provide lock-free atomic operations using CAS (Compare-And-Swap) hardware instructions. AtomicInteger.incrementAndGet() is atomic without a lock — faster than synchronized for simple operations under contention.
Frequently Asked Questions

What are the four conditions necessary for a deadlock, and how do you prevent it?
A deadlock requires all four conditions simultaneously: (1) Mutual exclusion: at least one resource can be held by only one thread at a time. (2) Hold and wait: a thread holds at least one resource while blocked waiting for additional resources held by other threads. (3) No preemption: resources cannot be forcibly taken from a holding thread; it must release them voluntarily. (4) Circular wait: a cycle exists in the wait-for graph (thread A waits for thread B, which waits for thread A). Prevention means breaking any one condition. Most practical: break circular wait by imposing a global total ordering on all locks and always acquiring them in that order. Example: if the system has locks L1, L2, L3, every thread must acquire them in the order L1 -> L2 -> L3, never in reverse. A thread that needs L2 while another thread holds L1 must itself acquire L1 first, so it simply blocks behind the holder and no cycle can form. Break hold and wait: attempt to acquire all needed locks atomically using tryLock(); if any acquisition fails, release all held locks and retry after a random backoff, which avoids deadlock at the cost of potential livelock (threads may keep retrying indefinitely). Break no preemption: implement a lock timeout; if a lock is not acquired within T milliseconds, abort the current operation and retry.

How do you implement a thread-safe bounded blocking queue in Python?
A bounded blocking queue holds up to N items; enqueue blocks when it is full and dequeue blocks when it is empty. Implementation using threading.Condition (which internally wraps a Lock): the queue maintains the underlying deque, a lock for mutual exclusion, and two Condition objects sharing that lock: not_full (signaled when space becomes available) and not_empty (signaled when an item becomes available). Enqueue: acquire the not_full condition, wait while the queue is at capacity, append the item, notify not_empty to wake a waiting consumer. Dequeue: acquire the not_empty condition, wait while the queue is empty, remove and return the first item, notify not_full. The key part of the threading.Condition.wait() contract: it atomically releases the lock and suspends the thread, then reacquires the lock before returning after a notification. The while loop (not if) around wait() handles spurious wakeups (wait() may return without a notification in some implementations) and re-checks the condition after each wakeup. Java equivalent: the BlockingQueue interface (LinkedBlockingQueue, ArrayBlockingQueue) provides this out of the box; the Condition pattern mirrors Java's ReentrantLock + Condition approach.

What is the difference between a mutex, a semaphore, and a monitor?
A mutex (mutual exclusion lock) provides exclusive access to a shared resource. Only the thread that acquired the mutex can release it (ownership semantics), and it has two states: locked and unlocked. If thread A holds the mutex, thread B's lock() call blocks until A calls unlock(). Python's threading.Lock and Java's synchronized are mutexes. A semaphore maintains a count with two operations: acquire() (decrement the count, block if it is 0) and release() (increment the count, wake a blocked thread). A binary semaphore (initial count 1) acts like a mutex but without ownership: any thread can release it, even one that did not acquire it. A counting semaphore (initial count N) allows up to N concurrent holders, useful for connection pools, thread-level rate limiting, and producer-consumer synchronization (two semaphores: one for "items available", one for "space available"). A monitor is a higher-level primitive combining a mutex, condition variables, and the data they protect. Java's synchronized blocks are monitors (every Java object has an intrinsic monitor), and Python's threading.Condition is one as well. The monitor ensures that condition-variable operations (wait, notify) are always performed while holding the associated mutex, preventing race conditions in condition checking. Key distinction: a semaphore counts resource availability, a mutex enforces exclusive ownership, and a monitor supports conditional waiting inside a critical section.