You are presented with two indistinguishable envelopes. You are told that one contains twice as much money as the other. You pick one and are about to open it when the host offers you the chance to switch to the other envelope. Should you switch?
This is the two envelopes paradox, one of the most beloved puzzles in the probability canon. The “naive” expected-value calculation appears to prove that switching is always better — even before opening the first envelope, which seems absurd by symmetry. The interview signal is whether the candidate can identify the bug in the calculation, articulate where the reasoning goes wrong, and explain what the correct framing produces.
The naive argument
Let X be the amount in your envelope. The other envelope contains either 2X or X/2, each “with probability 1/2” by symmetry. Your expected gain from switching is:
E[other] = 0.5 × 2X + 0.5 × (X/2) = X + X/4 = 1.25X
So switching gives you 1.25X in expectation, an increase of 0.25X. Always switch.
By the same logic, if you applied the argument to the other envelope after switching, you would conclude you should switch back. Each switch promises a 25% gain — an obvious contradiction. Something is wrong, but where?
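A quick Monte Carlo check makes the contradiction tangible. This sketch (the $10/$20 pair is an assumed example, not part of the puzzle statement) compares always keeping against always switching:

```python
import random

def simulate(trials=200_000, small=10):
    """Play the game with a fixed pair (small, 2*small); return the
    average payoff of always keeping vs always switching."""
    keep_total = 0
    switch_total = 0
    for _ in range(trials):
        envelopes = [small, 2 * small]
        random.shuffle(envelopes)
        mine, other = envelopes  # you hold `mine`; the host offers `other`
        keep_total += mine
        switch_total += other
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate()
# Both averages converge to (10 + 20) / 2 = 15: blind switching gains
# nothing, contradicting the naive promise of a 25% improvement.
```

Whatever pair you plug in, both strategies average the same amount; the promised 1.25X never materializes.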
The bug: the prior
The fallacy lies in the assumption that, given X in your envelope, the other envelope contains 2X or X/2 “each with probability 1/2”. That sounds like harmless symmetry, but it is a claim about a conditional distribution, and no proper prior on the envelope amounts can make it hold for every X. In any actual setup there is some prior distribution over what the host placed, and conditional on observing your envelope’s amount, the probabilities of the other envelope being 2X or X/2 are generally not 1/2 each.
To see this concretely, suppose the host puts $10 and $20 in the envelopes. You pick one. If you observe $10, the other is definitely $20 (probability 1, not 1/2). If you observe $20, the other is definitely $10 (probability 1, not 1/2). The naive calculation, which treats the two outcomes as equally likely regardless of what you observed, ignores this conditioning.
The right way to compute the expected value of the other envelope is to use Bayes’ rule given a specific prior on what the host might have placed. Whatever the prior, the conditional probabilities are not symmetric in X — and once you compute them correctly, the “always switch” conclusion goes away.
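To see how the conditioning works, here is a sketch with a hypothetical discrete prior (the {5, 10, 20} support is an assumption chosen for illustration): the host draws the smaller amount s uniformly from {5, 10, 20} and fills the envelopes with (s, 2s).

```python
from fractions import Fraction

# Hypothetical prior (an illustrative assumption): the host draws the
# smaller amount s uniformly from {5, 10, 20}; envelopes hold (s, 2*s).
prior = {5: Fraction(1, 3), 10: Fraction(1, 3), 20: Fraction(1, 3)}

def posterior_other_is_double(x):
    """P(other envelope = 2x | my envelope shows x), via Bayes' rule.
    I can see x either because the pair is (x, 2x) and I drew the small
    one, or because the pair is (x/2, x) and I drew the big one."""
    p_small = prior.get(x, Fraction(0)) * Fraction(1, 2)             # pair (x, 2x)
    p_big = prior.get(Fraction(x, 2), Fraction(0)) * Fraction(1, 2)  # pair (x/2, x)
    total = p_small + p_big
    return p_small / total if total else None

# Observing 5, the other envelope is certainly 10:
#   posterior_other_is_double(5)  -> 1
# Observing 40, it is certainly 20 -- not "2X with probability 1/2":
#   posterior_other_is_double(40) -> 0
```

The conditional probabilities come out of the prior, not out of a blanket “1/2 each” assumption.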
What does the correct calculation give?
Two cases:
- You have not opened your envelope. Without observing X, your expected value is simply the average of the two envelope amounts, whichever envelope you hold. Switching does nothing: by symmetry, both choices are identical.
- You have opened your envelope and observed amount X. Now the question is whether the conditional expectation of the other envelope, given X, exceeds X. The answer depends entirely on the prior distribution: for some priors switching is better given certain observed amounts; for others it is worse. There is no universal “always switch” rule, because the universal claim rested on a missing prior.
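The second case can be computed exactly for a hypothetical discrete prior (the {5, 10, 20} support is an assumption for illustration: the host draws the smaller amount uniformly from that set and fills the envelopes with (s, 2s)):

```python
from fractions import Fraction

# Illustrative prior (an assumption): smaller amount s uniform on {5, 10, 20}.
prior = {5: Fraction(1, 3), 10: Fraction(1, 3), 20: Fraction(1, 3)}

def expected_other(x):
    """E[other envelope | my envelope shows x] under this prior."""
    w_double = prior.get(x, Fraction(0))             # pair (x, 2x): other is 2x
    w_half = prior.get(Fraction(x, 2), Fraction(0))  # pair (x/2, x): other is x/2
    total = w_double + w_half
    if total == 0:
        return None  # x is impossible under this prior
    p_double = w_double / total
    return p_double * (2 * x) + (1 - p_double) * Fraction(x, 2)

# expected_other(5)  -> 10    (switch: the other envelope is certainly 10)
# expected_other(10) -> 25/2  (switch: 12.5 > 10)
# expected_other(40) -> 20    (keep: the other envelope is certainly 20)
```

Under this prior, switching is good after small observations and bad after the largest one; no single rule covers every X.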
The bounded vs unbounded distinction
An interesting variant: what if the prior is unbounded? Suppose the host might have put any amount in the envelopes, with no upper bound. Then no observed X rules out the other envelope being larger, and switching can look beneficial in expectation even for very large observed amounts.
This is where the puzzle becomes genuinely subtle. For any proper (normalized) prior with finite expected value, the correct posterior calculation cannot favor switching at every observed amount, and the paradox disappears. There do exist heavy-tailed proper priors with infinite expected value under which switching looks favorable for every observed X; but then the expected value of either envelope is infinite, and the comparison “is switching better on average?” carries no force. For an improper (uniform-over-all-positive-reals) prior, the setup is not well-defined at all. These unbounded cases are mathematical edge cases, not real-world setups.
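A classic construction (a known textbook example, assumed here for illustration) makes the heavy-tailed case concrete: let the pair be (2^k, 2^(k+1)) with probability (1/3)·(2/3)^k for k ≥ 0. The posterior calculation then favors switching at every observed amount, even though the prior is perfectly proper:

```python
from fractions import Fraction

def pair_prob(k):
    """P(pair = (2**k, 2**(k+1))) = (1/3) * (2/3)**k -- a proper prior."""
    return Fraction(1, 3) * Fraction(2, 3) ** k

def expected_gain_ratio(n):
    """E[other | my envelope shows 2**n] / 2**n, for n >= 1."""
    w_half = pair_prob(n - 1)   # pair (2**(n-1), 2**n): other is half
    w_double = pair_prob(n)     # pair (2**n, 2**(n+1)): other is double
    p_double = w_double / (w_half + w_double)
    return p_double * 2 + (1 - p_double) * Fraction(1, 2)

# For every n >= 1 the ratio is 11/10 > 1: switching "looks" favorable
# at every observed amount -- yet the prior's mean is infinite, so the
# unconditional comparison carries no force.
```

The prior is proper (its probabilities sum to 1); the pathology comes entirely from its infinite mean, which is why bounded priors dissolve the paradox cleanly.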
Most interview-friendly formulations sidestep the unbounded case by specifying that the host uses some bounded prior. The paradox dissolves in those cases as soon as the prior is properly accounted for.
What interviewers test
The two envelopes paradox tests three things:
- Recognition of the prior issue. The candidate identifies that the “1/2 and 1/2” assumption hides a missing distribution.
- Bayesian conditioning. The candidate articulates that the correct calculation is conditional on the observed amount and the prior over envelope contents.
- Symmetry argument. The candidate notes that without observing the envelope, switching cannot have any effect by symmetry — a sanity check that disproves the naive argument before computing anything.
Quant firms like this puzzle because it exposes a common error pattern in real probability work: applying a conditioning step incorrectly, or treating “I don’t know” as “uniform” in a way that changes the answer. Catching this kind of error is exactly the skill a quant trader needs when reasoning about market data and unknown distributions.
Variations interviewers ask
- Open one envelope, then decide. Now the answer depends on the prior. If the host’s prior favors small amounts, switching after seeing a large amount is bad; if the prior favors large amounts, switching after seeing a small amount is good.
- Multiple envelopes. N envelopes with progressively larger amounts. Same paradox, harder math.
- Asymmetric ratios. One envelope contains 3x the other, not 2x. The paradox structure is the same; the numbers shift.
- “What if the host is adversarial?” Now the prior conditioning becomes game-theoretic, and the optimal strategy depends on what the host is optimizing.
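For the asymmetric-ratio variant, the naive calculation generalizes in one line (a sketch; the ratio parameter r is an illustration): E[other]/X = 0.5·r + 0.5/r, which exceeds 1 for every r > 1, so the fallacy is not special to r = 2.

```python
def naive_switch_ratio(r):
    """The naive expected value of the other envelope, as a multiple of X,
    when one envelope holds r times as much as the other."""
    return 0.5 * r + 0.5 / r

# naive_switch_ratio(2) == 1.25 (the classic 25% "gain");
# naive_switch_ratio(3) is about 1.667 -- the bigger the ratio, the bigger
# the illusory gain, and the same missing-prior bug in every case.
```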
The deeper lesson
The puzzle is a textbook example of how careless probabilistic reasoning produces paradoxes that resolve only when the missing assumptions are made explicit. “I have no information about X” is not the same as “X is uniformly distributed”. Treating the two as equivalent is the source of many real-world reasoning errors — in trading, in machine learning, in scientific inference. The two envelopes paradox is the cleanest one-paragraph illustration of the trap.
A polished candidate, after solving the puzzle, points to this connection. “This is why we always have to be explicit about priors.” That observation, made unprompted, signals deep probabilistic literacy.
Is it asked in 2026?
Yes, regularly in quant interviews. Jane Street, Citadel, Two Sigma, and similar firms use the two envelopes paradox as an entry-level filter for whether a candidate can spot a missing-prior fallacy. The standard follow-ups (open one envelope, multiple envelopes, asymmetric ratios) probe the candidate’s depth.
In tech interviews, the puzzle is rare. Some statistics-heavy ML or data-science roles include it as a Bayesian-reasoning sanity check, but it is far more common in finance.
Frequently Asked Questions
Should I always switch?
No. The “always switch” argument is the bug. Before you open your envelope, switching has no expected gain by symmetry; after you open it, the answer depends on the prior over what the host placed.
What if I open the envelope?
Whether switching is better depends on the prior distribution over envelope contents. Without specifying the prior, the question is not well-defined.
Why does the naive calculation fail?
It treats “given X in my envelope, the other has 2X or X/2 with probability 1/2 each” as a symmetric conditional, which it is not. The conditional probabilities depend on the prior over what the host placed, which the naive argument ignores.
Is there a “right answer” to the paradox?
The right answer is to insist on a specified prior. With a specified prior, the calculation is straightforward and the paradox vanishes. The paradox exists only when the question is framed without acknowledging the missing prior.
How is this related to the Monty Hall problem?
Both are conditioning paradoxes. Monty Hall surprises people because they fail to update on the host’s information. Two envelopes surprises people because they fail to update on the prior. Different surfaces, related underlying error.