Conditional Probability and Bayesian Interview Problems for Quant Roles
Conditional probability is the most heavily tested topic in quant-trading interviews after expected value. Jane Street, SIG, Citadel Securities, Optiver, Akuna, IMC, and most prop shops use Bayesian-flavored problems to probe whether candidates can reason cleanly about how new information should change a probability estimate. The skill matters in trading: every incoming order, news event, or counterparty action carries information, and updating your fair-value estimate quickly and correctly is most of what a market maker does mentally.
This guide walks through the framework, the canonical interview problems, and the patterns that distinguish strong candidates from candidates who memorize formulas without understanding when to apply them.
The Core Framework
Conditional probability is just: P(A | B) = P(A and B) / P(B). In words: the probability of A given B equals the joint probability of A and B, divided by the marginal probability of B.
Bayes’ theorem rearranges this:
P(A | B) = P(B | A) × P(A) / P(B)
Or in more useful form when computing posteriors from priors:
P(A | B) = P(B | A) × P(A) / [P(B | A) × P(A) + P(B | not A) × P(not A)]
For interview purposes, you don’t need to memorize this in symbolic form. You need to be fluent at setting up the calculation: identifying the prior, the likelihood under each hypothesis, and the marginal denominator. Most candidates get the formula right but set up the problem incorrectly; the interviewer is watching for setup, not formula recall.
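The prior–likelihood–marginal structure can be sketched as a small helper (a minimal illustration for practice, not a named library function; `posterior` and its argument names are ours):

```python
from fractions import Fraction

def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """P(A | B) = likelihood x prior / marginal.

    prior_a:          P(A), the prior
    p_b_given_a:      P(B | A), the likelihood under A
    p_b_given_not_a:  P(B | not A), the likelihood under not-A
    """
    marginal_b = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / marginal_b

# Example: coin is fair or heads-biased (3/4) with 50/50 prior; one head observed
print(posterior(Fraction(1, 2), Fraction(3, 4), Fraction(1, 2)))  # 3/5
```

Using `Fraction` keeps the arithmetic exact, which mirrors how you would carry fractions through a mental calculation in the interview.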
Canonical Problem 1: Disease Test
“A disease has 1% prevalence. A test has 99% sensitivity (P(positive | disease) = 0.99) and 99% specificity (P(negative | no disease) = 0.99). Someone tests positive. What’s P(disease | positive)?”
Setup: P(disease) = 0.01. P(positive | disease) = 0.99. P(positive | no disease) = 0.01.
Marginal: P(positive) = P(positive | disease) × P(disease) + P(positive | no disease) × P(no disease) = 0.99 × 0.01 + 0.01 × 0.99 = 0.0099 + 0.0099 = 0.0198.
Posterior: P(disease | positive) = 0.0099 / 0.0198 = 0.5.
The answer surprises most candidates: a 99%-accurate test only gives 50% confidence on a positive result, because the disease is rare. This is the canonical illustration of base rates.
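The arithmetic above can be verified in a few lines (a sketch using exact fractions; the variable names are ours):

```python
from fractions import Fraction

prevalence = Fraction(1, 100)     # P(disease)
sensitivity = Fraction(99, 100)   # P(positive | disease)
false_pos = Fraction(1, 100)      # P(positive | no disease) = 1 - specificity

# Marginal probability of testing positive
p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
# Posterior probability of disease given a positive test
p_disease_given_pos = sensitivity * prevalence / p_positive
print(p_disease_given_pos)  # 1/2
```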
Canonical Problem 2: Two Coins
“I have two coins. One is fair. The other comes up heads with probability 3/4. I pick one at random and flip it three times, getting HHH. What’s the probability I picked the biased coin?”
Prior: P(fair) = P(biased) = 1/2.
Likelihood: P(HHH | fair) = (1/2)^3 = 1/8. P(HHH | biased) = (3/4)^3 = 27/64.
Posterior: P(biased | HHH) = [P(HHH | biased) × P(biased)] / [P(HHH | biased) × P(biased) + P(HHH | fair) × P(fair)]
= (27/64 × 1/2) / (27/64 × 1/2 + 1/8 × 1/2) = (27/128) / (27/128 + 8/128) = 27/35 ≈ 0.77.
The prior was 50/50; three heads in a row updated us toward the biased coin to about 77%.
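The same calculation in exact fractions (a quick check of the numbers above):

```python
from fractions import Fraction

p_fair = p_biased = Fraction(1, 2)       # 50/50 prior
lik_fair = Fraction(1, 2) ** 3           # P(HHH | fair) = 1/8
lik_biased = Fraction(3, 4) ** 3         # P(HHH | biased) = 27/64

post = lik_biased * p_biased / (lik_biased * p_biased + lik_fair * p_fair)
print(post)  # 27/35
```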
Canonical Problem 3: The Boy/Girl Paradox
“I have two children. At least one is a boy. What’s the probability both are boys?”
Prior space: {BB, BG, GB, GG}, each with probability 1/4.
Condition: “at least one is a boy” eliminates GG. Remaining: {BB, BG, GB}, each with conditional probability 1/3.
Answer: P(BB | at least one boy) = 1/3.
The trick: many candidates intuitively answer 1/2 because they unconsciously condition on “the first child is a boy” rather than “at least one of the two children is a boy.” The framing matters. The famous variation: “I have two children. At least one is a boy born on Tuesday. What’s the probability both are boys?” The answer becomes 13/27, much closer to 1/2, because the additional information (born Tuesday) makes the condition more specific and asymmetrically affects the joint distribution.
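Both answers can be confirmed by enumerating the equally likely sample space (a brute-force sketch; the day labels are arbitrary):

```python
from itertools import product

DAYS = range(7)
TUESDAY = 1  # arbitrary label for Tuesday
children = list(product("BG", DAYS))          # 14 equally likely (sex, day) types
families = list(product(children, repeat=2))  # 196 equally likely two-child families

# Plain version: condition on "at least one boy" (birthdays ignored)
at_least_boy = [f for f in families if any(sex == "B" for sex, _ in f)]
both_boys = [f for f in at_least_boy if all(sex == "B" for sex, _ in f)]
print(len(both_boys), "/", len(at_least_boy))  # 49 / 147, i.e. 1/3

# Tuesday version: condition on "at least one boy born on Tuesday"
tue_boy = [f for f in families if ("B", TUESDAY) in f]
both_boys_tue = [f for f in tue_boy if all(sex == "B" for sex, _ in f)]
print(len(both_boys_tue), "/", len(tue_boy))   # 13 / 27
```

Enumeration makes the asymmetry concrete: the more specific condition shrinks the conditioning set from 147 families to 27, and the boy-boy families are over-represented in the smaller set.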
Canonical Problem 4: Monty Hall
Three doors, one with a prize. You pick door 1. Monty (who knows where the prize is) opens door 3 to reveal no prize. Should you switch to door 2?
Yes, switch. P(prize behind 2 | Monty opened 3) = 2/3, while P(prize behind 1 | Monty opened 3) = 1/3.
Why: the prior was 1/3 each. Monty’s choice of door 3 is informative — if the prize is behind 1, Monty could open either 2 or 3, but if the prize is behind 2, Monty must open 3. Likelihood of “Monty opens 3” is 1/2 if prize behind 1, 1 if prize behind 2, 0 if prize behind 3. Posterior weights are 1/3 × 1/2 vs 1/3 × 1 vs 0, normalized to 1/3 vs 2/3 vs 0.
This is conditional probability disguised as a game-show question. The interviewer wants to see you set up the likelihood correctly under each hypothesis.
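If the likelihood argument feels slippery, a short Monte Carlo sketch confirms it (assumptions: you always pick door 0; when Monty has a choice, he opens the lower-numbered door, which does not change the switch win rate):

```python
import random

def play(switch, rng):
    """One round of Monty Hall; returns True if the final choice wins."""
    prize = rng.randrange(3)
    pick = 0  # always pick door 0
    # Monty opens a door that is neither the pick nor the prize
    monty = next(d for d in (1, 2) if d != prize)
    # Doors are {0, 1, 2} and sum to 3, so 3 - monty is the other closed door
    final = (3 - monty) if switch else pick
    return final == prize

rng = random.Random(42)
n = 100_000
wins = sum(play(switch=True, rng=rng) for _ in range(n))
print(wins / n)  # close to 2/3
```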
Canonical Problem 5: Sequential Updating
“A coin is either fair or two-headed (50/50 prior). I flip it once and get heads. What’s the posterior probability it’s two-headed? Then I flip again and get heads. New posterior?”
After flip 1: P(2H | H) = (1 × 1/2) / (1 × 1/2 + 1/2 × 1/2) = 0.5 / 0.75 = 2/3.
After flip 2 (using 2/3 as new prior): P(2H | HH) = (1 × 2/3) / (1 × 2/3 + 1/2 × 1/3) = (2/3) / (2/3 + 1/6) = (2/3) / (5/6) = 4/5.
Sequential Bayesian updating is the same logic as one-shot Bayesian updating — the posterior from the first flip becomes the prior for the second. Strong candidates verbalize this without needing prompting.
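The two updates above can be chained explicitly, with the first posterior feeding in as the second prior (a minimal sketch; `update` is our own helper name):

```python
from fractions import Fraction

def update(prior_2h, lik_2h, lik_fair):
    """One Bayesian update toward the two-headed hypothesis."""
    marginal = lik_2h * prior_2h + lik_fair * (1 - prior_2h)
    return lik_2h * prior_2h / marginal

p = Fraction(1, 2)                          # prior: coin is two-headed
p = update(p, Fraction(1), Fraction(1, 2))  # observe H once
print(p)  # 2/3
p = update(p, Fraction(1), Fraction(1, 2))  # observe H again; old posterior is the new prior
print(p)  # 4/5
```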
Common Traps
Confusing P(A | B) with P(B | A)
Candidates often invert direction. P(positive | disease) is not the same as P(disease | positive). Always state explicitly what’s given and what’s being asked.
Forgetting the marginal
Some candidates compute P(A and B) and stop, forgetting to divide by P(B). Always set up the full Bayes formula, even when the computation is simple.
Implicit uniform priors
If the problem doesn’t state a prior, ask. “Is the coin chosen uniformly at random?” Otherwise the problem is underspecified, and any single numeric answer rests on an unstated assumption.
Misreading conditioning
The boy-girl paradox shows that “at least one X” is different from “the first one is X.” Conditioning on the existence of an event vs conditioning on a specific event yields different posteriors.
Skipping the sanity check
After computing a posterior, ask: does this answer make sense? If your posterior is less than your prior despite confirming evidence, you’ve likely made a mistake. If it’s wildly different, double-check the likelihood.
Strategy for Solving Interview Problems
- State the prior explicitly: “Initially, P(A) = …”
- State the likelihoods: “Given A, the probability of observing this is …; given not-A, it’s …”
- Compute the marginal: “So the marginal probability of the observation is …”
- Compute the posterior: “Therefore the posterior is …”
- Sanity-check: “This is higher than the prior, consistent with the evidence supporting A.”
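The checklist above can be turned into a narrated calculation (the numbers here are hypothetical, chosen only to illustrate the five steps):

```python
from fractions import Fraction

# Hypothetical problem: prior 30%, likelihood 0.8 under A, 0.2 under not-A
prior = Fraction(3, 10)
lik_a, lik_not_a = Fraction(4, 5), Fraction(1, 5)

print(f"1. Prior: P(A) = {prior}")
print(f"2. Likelihoods: P(obs | A) = {lik_a}, P(obs | not A) = {lik_not_a}")
marginal = lik_a * prior + lik_not_a * (1 - prior)
print(f"3. Marginal: P(obs) = {marginal}")
post = lik_a * prior / marginal
print(f"4. Posterior: P(A | obs) = {post}")
print(f"5. Sanity check: posterior > prior is {post > prior}, "
      f"consistent with the evidence favoring A")
```

Practicing with the narration written out builds the habit of verbalizing each step under interview pressure.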
Verbalize each step. Interviewers want to see structured thinking. Even if you compute mentally, narrate.
Frequently Asked Questions
How important is conditional probability vs other probability topics?
Critical. After expected value, conditional probability is the most-tested topic at quant-trading interviews. Almost every brainteaser involves Bayesian reasoning at some level: updating from new information, conditioning on events, computing posteriors. Strong candidates spend serious prep time here. The disease-test problem and the boy-girl paradox in particular are asked extremely often, and the “correct” answer is well-known — the interviewer is testing setup and articulation, not formula recall.
Should I memorize Bayes’ theorem in symbolic form?
Memorize the structure (prior × likelihood / marginal), not the symbols. In an interview, you’ll set up problems verbally: “the prior is X, the likelihood under A is Y, under not-A is Z, so the marginal is … and the posterior is …” Symbolic memorization without intuition fails when the problem doesn’t fit a textbook template. Practicing 10–20 problems out loud builds the intuition far better than re-reading the formula.
What if the interviewer asks a problem I’ve seen before?
Be honest: “I’ve seen the disease-test problem; the answer involves base rates and the surprising 50% posterior.” Then offer to walk through the setup explicitly, or invite a variation. Pretending you haven’t heard it and stumbling on a known problem is a worse signal than acknowledging it. Many interviewers will pivot to a variation (different base rate, different sensitivity) that probes whether you understand the structure or just memorized the answer.
How do conditional-probability questions appear at trading firms vs hedge funds?
At trading firms (Optiver, SIG, Akuna, IMC) they tend to be embedded in market-making rounds: the interviewer reveals partial information, you update your fair-value estimate, the interviewer trades against you and reveals more. At hedge funds (Two Sigma, D. E. Shaw, Citadel) they tend to appear in research interviews as standalone probability problems with clean setups. Both flavors test the same skill; the framing differs.
What’s the most common mistake candidates make on these problems?
Inverting conditional direction — computing P(B | A) when the question asks P(A | B). The disease-test problem catches this directly: candidates compute “the test is 99% accurate” and report 99% as the answer, missing that the prior of disease is 1%. The fix: write down both directions explicitly. “P(positive | disease) = 0.99. P(disease | positive) is what we want.” That single discipline prevents the most common error.
See also: Breaking Into Quant Finance and Wall Street: 2026 Guide • Expected Value and Fair-Game Reasoning • Probability Brainteasers for Quant Interviews