How can you get a fair coin toss if someone hands you a coin that is weighted to come up heads more often than tails?
2026 Update: Von Neumann’s Fair Coin Trick — Bias-Free Sampling
Classic puzzle: You have a biased coin (P(heads) = p, unknown). How do you generate a fair 50/50 outcome?
Von Neumann’s solution: Flip twice. If HT → call it heads. If TH → call it tails. If HH or TT → repeat. Both HT and TH have probability p(1-p) — equal regardless of p!
import random

def fair_from_biased(biased_prob: float) -> str:
    """Generate a fair 'H'/'T' using a biased coin with probability p of heads."""
    while True:
        flip1 = random.random() < biased_prob
        flip2 = random.random() < biased_prob
        if flip1 and not flip2:
            return 'H'  # HT
        if not flip1 and flip2:
            return 'T'  # TH
        # HH or TT: discard the pair and flip again

def expected_flips_von_neumann(p: float) -> float:
    """Expected number of flips per fair output."""
    # P(accept) = P(HT) + P(TH) = 2p(1-p)
    # E[rounds] = 1/P(accept), each round uses 2 flips
    p_accept = 2 * p * (1 - p)
    return 2 / p_accept

# For p=0.3: expected flips = 2 / (2*0.3*0.7) = 2/0.42 ≈ 4.76
print(f"Expected flips (p=0.3): {expected_flips_von_neumann(0.3):.2f}")
# Verify fairness empirically
def test_fairness(p, n_trials=100000):
    heads = sum(1 for _ in range(n_trials) if fair_from_biased(p) == 'H')
    return heads / n_trials

for p in [0.1, 0.3, 0.7, 0.9]:
    fairness = test_fairness(p, 20000)
    print(f"p={p}: fair output heads fraction = {fairness:.3f} (target ~0.500)")

# Mathematical proof it works:
# P(output H) = P(HT) / (P(HT) + P(TH)) = p*(1-p) / (p*(1-p) + (1-p)*p) = 0.5
print("\nMath: P(output H) = p*(1-p) / [p*(1-p) + (1-p)*p] = 0.5 for all p")
Real-world applications:
- Cryptography: Hardware RNGs have bias; von Neumann debiasing is a standard step before use
- A/B testing: When assignment mechanism is suspected biased, use rejection sampling
- Load balancing: Generating uniform random server selections from a non-uniform source
- Machine learning: rejection sampling is a core sampling technique in its own right, used alongside MCMC and importance sampling
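The debiasing step used in hardware RNG pipelines can be sketched in a few lines (a sketch, assuming the biased source is already a Python list of 0/1 bits; `vn_debias` is an illustrative name, not a standard API):

```python
def vn_debias(bits):
    """Von Neumann debiasing over a stream of i.i.d. biased bits.

    Pairs up consecutive bits, emits the first bit of each unequal
    pair, and discards equal pairs. The output is unbiased provided
    the input bits are independent with a fixed (unknown) bias.
    """
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# A heavily biased stream still yields unbiased (but few) bits:
print(vn_debias([1, 0, 0, 1, 1, 1, 1, 1]))  # -> [1, 0]
```

Note how little survives: equal pairs dominate when the bias is strong, which is exactly the inefficiency the next note discusses.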
Efficiency note: Von Neumann’s method is inefficient when p is near 0 or 1 (most flips are discarded). Elias (1972) and Peres (1992) algorithms extract more entropy per flip sequence. For p=0.01, von Neumann needs 1/(p(1-p)) ≈ 100 flips per fair bit; Peres needs roughly 20.
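To show where Peres finds the extra entropy, here is a compact recursive version of his extractor (a sketch under my own naming; practical implementations work iteratively on fixed-size blocks rather than with naive recursion):

```python
def peres(bits):
    """Peres (1992): run von Neumann, then recurse on what it discarded.

    Each pair (a, b) yields a von Neumann bit when a != b. The parity
    stream a^b and the agreeing pairs still carry entropy, so the same
    procedure is applied to both recursively.
    """
    if len(bits) < 2:
        return []
    vn, parities, agrees = [], [], []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            vn.append(a)        # the classic von Neumann output bit
        else:
            agrees.append(a)    # a == b: keep one copy of the pair
        parities.append(a ^ b)  # XOR of each pair
    return vn + peres(parities) + peres(agrees)

# HHTT yields nothing under plain von Neumann, but Peres recovers a bit:
print(peres([1, 1, 0, 0]))  # -> [1]
```

Since the von Neumann bits are a prefix of the output, Peres never yields fewer fair bits than the classic method on the same input.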
💡 Strategies for Solving This Problem
The Classic Probability Puzzle
This came up in my Google interview. It's not really a coding question; it's about thinking probabilistically. But you can implement it, which is what they asked me to do.
The Problem
You have a biased coin (unknown probability p of heads). How do you generate a fair 50/50 result?
The Insight (von Neumann's Trick)
Flip the coin twice:
- HT → return "heads" (probability: p × (1-p))
- TH → return "tails" (probability: (1-p) × p)
- HH or TT → try again
Since p × (1-p) = (1-p) × p, the outcomes HT and TH are equally likely!
Why It Works
Even though the coin is biased, the probability of HT equals the probability of TH. By mapping HT to "heads" and TH to "tails", we get a fair result.
Expected flips: 1/(p(1-p)). For p=0.5 (a fair coin), that's 4 flips on average. For extreme bias (p near 0 or 1), it can be many more.
What My Interviewer Asked
- "Explain why it works" (probability argument)
- "What if the coin lands the same both times?" (retry)
- "What's the expected number of flips?" (depends on p)
- "Implement it" (code below)
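A minimal interview-style version, written against an arbitrary coin rather than a probability parameter (a sketch: `flip` is assumed to be any zero-argument callable returning 'H' or 'T', and the 70/30 coin is just for the demo):

```python
import random

def fair_flip(flip):
    """Return a fair 'H' or 'T' given any biased flip() -> 'H'/'T'."""
    while True:
        a, b = flip(), flip()
        if a != b:
            return a  # HT -> 'H', TH -> 'T'
        # HH or TT: retry

# demo: a coin biased 70/30 toward heads
biased_coin = lambda: 'H' if random.random() < 0.7 else 'T'
results = [fair_flip(biased_coin) for _ in range(100_000)]
print(results.count('H') / len(results))  # ~0.5 despite the 70/30 coin
```

Taking the coin as a callable keeps the fairness argument honest: `fair_flip` never sees p, yet its output is 50/50 for any fixed bias.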