Five rational pirates capture 100 gold coins. They have to divide the loot using a strict procedure. The most senior pirate proposes a division. All five pirates vote. If at least half the pirates vote yes (ties go to the proposer), the proposal passes. If the proposal fails, the proposer is thrown overboard, and the next-most-senior pirate proposes. Pirates are perfectly rational, perfectly self-interested, and would prefer to throw a colleague overboard given equal coin payouts.
What does the most senior pirate propose, and what is the outcome?
This is the most elegant game theory puzzle in the interview canon. Quant firms ask it. Senior tech interviewers ask it as a “let’s think through a hard reasoning problem” warmup. Game theory textbooks include it. The reason it is famous is that the answer is deeply counter-intuitive — most candidates assume the senior pirate either gets nothing or has to bribe nearly everyone — and the path to the right answer demonstrates a powerful general technique: backward induction.
Setting up the problem
Number the pirates 5 through 1 by seniority. Pirate 5 is most senior and proposes first. The voting rule is “at least half” with the proposer’s vote counting as a tiebreaker. With five pirates, a proposal needs at least 3 of 5 votes (half of five, rounded up). Pirates have a strict ordering of preferences:
- Survive.
- Maximize coins.
- Among indifference, prefer to see another pirate die.
That third preference is the key. It means a pirate will vote no on a proposal that gives them the same payoff as the next round’s proposal would, because in the next round another pirate dies.
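The lexicographic ordering can be made concrete. A minimal sketch in Python (the tuple encoding is mine, not part of the puzzle’s statement):

```python
# Encode an outcome for one pirate as (alive, coins, other_pirates_dead).
# Python's tuple comparison is lexicographic, which is exactly the
# preference ordering above: survival first, then coins, then bloodshed.

# Survival dominates any pile of gold:
assert (True, 0, 0) > (False, 100, 0)
# More coins beats fewer, all else equal:
assert (True, 2, 0) > (True, 1, 0)
# At equal coins, a pirate prefers the outcome with one more death,
# which is why an offer that merely matches next round gets voted down:
assert (True, 1, 1) > (True, 1, 0)
```

The last assertion is the one that drives the whole analysis: a bribe has to strictly beat the voter’s next-round payoff.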
Backward induction
The technique to solve this is to start from the last possible scenario and work backward. We figure out what each pirate does in each possible “few pirates left” scenario, then use that to figure out what happens with one more pirate, and so on, until we reach the original five-pirate problem.
Case: 1 pirate left (pirate 1). Pirate 1 takes all 100 coins and votes yes; 1 of 1 is at least half, so it passes trivially. Outcome: pirate 1 gets 100.
Case: 2 pirates left (pirate 2 proposes). Pirate 2 proposes any division and votes yes themselves. With two pirates, half is 1 vote, and the proposer’s tiebreaker counts. So pirate 2 needs only their own vote. Pirate 2 takes all 100 coins. Pirate 1 votes no but is outvoted. Outcome: pirate 2 gets 100, pirate 1 gets 0.
Case: 3 pirates left (pirate 3 proposes). Pirate 3 needs 2 of 3 votes. They will vote yes themselves; they need one more. Pirate 1 knows that if pirate 3’s proposal fails, pirate 2 will give them 0 in the next round. So pirate 1 will accept any positive offer. Pirate 3 offers pirate 1 just 1 coin and keeps 99 for themselves. Pirate 2 votes no (they would rather pirate 3 die so they can keep all 100), pirate 1 votes yes (1 > 0), pirate 3 votes yes. Two yeses, passes. Outcome: pirate 3 gets 99, pirate 2 gets 0, pirate 1 gets 1.
Case: 4 pirates left (pirate 4 proposes). Pirate 4 needs 2 of 4 votes (with the tiebreaker). They have their own vote; they need one more. Look at the 3-pirate scenario for each other pirate’s “next round” payoff: pirate 3 would get 99, pirate 2 would get 0, pirate 1 would get 1. So pirate 2’s “do nothing and let me die” payoff is 0 — they will accept any positive offer from pirate 4. Pirate 4 offers pirate 2 a single coin, keeps 99, gives pirate 3 nothing (they cannot be cheaply bribed; they would get 99 in the next round) and gives pirate 1 nothing (bribing pirate 1 would cost 2 coins, since pirate 1 gets 1 in the next round; pirate 2 is cheaper at 1 coin). Pirate 4’s vote plus pirate 2’s vote = 2 of 4, passes with the tiebreaker. Outcome: pirate 4 gets 99, pirate 3 gets 0, pirate 2 gets 1, pirate 1 gets 0.
Case: 5 pirates left (pirate 5 proposes). Pirate 5 needs 3 of 5 votes. They have their own vote; they need two more. Look at the 4-pirate scenario: if pirate 5’s proposal fails, pirate 4 would get 99, pirate 3 would get 0, pirate 2 would get 1, pirate 1 would get 0. So pirate 5’s cheapest bribes are to pirate 3 and pirate 1, both of whom would get 0 in the next round. Bribe each with just 1 coin. Total bribery cost: 2 coins. Pirate 5 keeps 98.
Final answer: Pirate 5 proposes (98, 0, 1, 0, 1) — taking 98 themselves, giving 1 each to pirates 3 and 1, nothing to pirates 4 and 2. Pirates 5, 3, and 1 vote yes (3 of 5), passes. Pirate 5 gets 98 coins, pirate 3 gets 1, pirate 1 gets 1, others get 0.
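The five cases above can be checked mechanically. Here is a minimal backward-induction sketch in Python (function and variable names are mine); each recursive call is one “fewer pirates left” case:

```python
def divide(n, coins=100):
    """Equilibrium split when the most senior of n pirates proposes.
    Index i of the result holds pirate i+1's coins (the proposer is last)."""
    if n == 1:
        return [coins]                    # pirate 1 simply keeps everything
    nxt = divide(n - 1, coins)            # payoffs if this proposal fails
    need = (n + 1) // 2 - 1               # "at least half" minus the own vote
    # Bribe the cheapest voters: one coin more than their next-round payoff,
    # because at equal coins a pirate prefers to see the proposer die.
    targets = sorted(range(n - 1), key=lambda i: nxt[i])[:need]
    split = [0] * n
    for i in targets:
        split[i] = nxt[i] + 1
    split[n - 1] = coins - sum(split)
    return split

for n in range(1, 6):
    print(n, divide(n))
```

Running it reproduces the staircase from the walkthrough: [100], [0, 100], [1, 0, 99], [0, 1, 0, 99], and finally [1, 0, 1, 0, 98] for five pirates.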
Why this answer is famous
The answer surprises almost everyone on first hearing. The intuition that says “the senior pirate has to bribe a lot to stay alive” turns out to be wrong: with backward induction, the senior pirate keeps 98% of the loot and bribes only two of the four others, with just a single coin each. The result is a clean illustration of how rational actors with full information make seemingly unfair outcomes inevitable, because every pirate’s best response is determined by what happens in every counterfactual branch.
The pirates puzzle is famous specifically because it has a clean, counter-intuitive, fully-determined answer. Most game theory problems either have multiple equilibria or require strong assumptions to get to a single answer. The pirates problem has exactly one rational outcome under standard assumptions, and that outcome is an extreme corner of the payoff space. That combination — one answer, deeply non-obvious — is what makes a puzzle famous.
Variations that change the answer
The pirates puzzle has many variants, and the answer changes dramatically as you adjust the parameters:
- Different vote thresholds. If the threshold is “strict majority” (more than half) without a tiebreaker, the math shifts at even-sized groups: pirate 2 can never pass a proposal alone (they need 2 of 2, and pirate 1 always votes no), so the whole chain changes and pirate 5 ends up keeping 97 rather than 98.
- More pirates than coins. If 200 pirates are dividing 100 coins, eventually the senior pirate cannot afford enough bribes to pass and has to settle for survival without coins. The exact threshold and the staircase of “who survives” become a famous extension problem.
- Pirates value not dying very strongly. If pirates are extremely risk-averse about any chance of death, they may accept worse offers rather than let the game reach a round where their own proposal could fail. This changes the equilibrium.
- Asymmetric information. If pirates do not know the seniority ordering of others, the cleanly-rational solution falls apart and the answer becomes much more complex.
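The threshold variant is easy to explore once the induction also tracks deaths, because under strict majority pirate 2 can never pass a proposal and is doomed if the game reaches them. A sketch generalizing the solver (names are mine, and deterministic tie-breaking among equally cheap bribes is an assumption):

```python
from math import ceil

def outcome(n, coins=100, threshold=lambda n: ceil(n / 2)):
    """Resolve the subgame with pirates 1..n alive (pirate n proposes).
    Returns (payoffs, dead): payoffs[i] is pirate i+1's coins, and dead is
    the set of pirates thrown overboard on the equilibrium path."""
    if n == 1:
        return [coins], set()
    nxt, nxt_dead = outcome(n - 1, coins, threshold)
    # Price of pirate i's vote: free if rejection gets them killed later
    # (survival trumps coins), otherwise one coin more than their
    # continuation payoff (revenge breaks ties toward "no").
    price = {i: 0 if (i + 1) in nxt_dead else nxt[i] + 1 for i in range(n - 1)}
    wanted = threshold(n) - 1              # votes beyond the proposer's own
    cheapest = sorted(price, key=price.get)[:wanted]
    cost = sum(price[i] for i in cheapest)
    if cost > coins:
        # The proposer cannot buy enough votes and goes overboard.
        return nxt + [0], nxt_dead | {n}
    payoffs = [0] * n
    for i in cheapest:
        payoffs[i] = price[i]
    payoffs[n - 1] = coins - cost
    return payoffs, set()                  # proposal passes; nobody dies

# Standard rule ("at least half"): captain keeps 98.
print(outcome(5))
# Strict majority: pirate 2 dies in the 2-pirate subgame, the chain
# shifts, and the captain's share drops.
print(outcome(5, threshold=lambda n: n // 2 + 1))
```

The same function also shows why pirate 2 accepts nothing in the strict-majority chain: their continuation is death, so their vote is free.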
What the question tests
Pirates is famous in interviews because it tests a generalizable skill — backward induction — that shows up in real work all the time. Pricing American options uses backward induction. Bargaining theory uses backward induction. Sequential auctions use backward induction. A candidate who can solve the pirates problem cleanly has demonstrated they can navigate a multi-stage game by reasoning about the last stage first, and that skill transfers directly to many quant and trading scenarios.
The signal layers in a good answer:
- Recognize the technique. The candidate names “backward induction” or describes its essence within 30 seconds. A candidate who tries to reason forward from pirate 5’s perspective gets stuck immediately.
- Execute the cases cleanly. Walk through 1-pirate, 2-pirate, 3-pirate, etc. without confusion or arithmetic errors.
- Handle the tiebreaker correctly. The “proposer’s tiebreaker” rule is easy to misread. A candidate who confuses “at least half” with “more than half” gets a different answer.
- Articulate the surprise. A polished candidate notes that the pirate captain ends up with 98% of the gold, and explains why this is the equilibrium even though it looks unfair.
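The two threshold readings can be compared directly; this quick sketch (plain Python, nothing assumed beyond the rules stated above) shows they disagree exactly when the number of pirates is even:

```python
from math import ceil

# "at least half" (proposer wins ties) vs. "strict majority" (more than half)
for n in range(1, 7):
    at_least_half = ceil(n / 2)
    strict_majority = n // 2 + 1
    marker = "  <- differs" if at_least_half != strict_majority else ""
    print(f"{n} pirates: need {at_least_half} vs {strict_majority}{marker}")
```

With five pirates both rules demand 3 votes, which is why the misreading often goes unnoticed until the even-sized subgames.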
Is the pirates question still asked in 2026?
Yes, regularly at quant interviews and sometimes at staff-engineer interviews where the interviewer wants to test reasoning under multi-stage conditions. The puzzle has not lost its punch despite being well-known, because most candidates have only heard about it casually and have not actually walked through the backward-induction cleanly. A polished, fully-reasoned answer in two minutes is still a strong signal even when the interviewer suspects the candidate has seen the problem before.
In tech specifically, the pure pirates question is rare, but the general technique — “let’s reason backwards from the final state” — appears in dynamic programming problems and in some system design problems where capacity planning requires reasoning about end-of-life states first.
Frequently Asked Questions
What is the pirate captain’s actual share?
98 of 100 coins. They give 1 each to pirates 3 and 1 to secure their votes, and keep the rest.
Why does pirate 4 get nothing?
Because in the next round (without pirate 5), pirate 4 would propose and take 99 for themselves. Their counterfactual payoff is 99, which the captain cannot afford to match cheaply, so the captain skips pirate 4 entirely and bribes pirates 3 and 1, who would otherwise get 0.
Does the answer change if pirates value coins over revenge?
The third-tier preference (preferring to see another pirate die at equal payoffs) is what forces pirates to vote no on offers that merely match the next round. Without that tiebreaker, indifferent pirates can vote either way, so the equilibrium is no longer unique, and outcomes where the captain matches the zero offers and keeps everything become consistent with rationality. The standard formulation includes the revenge preference precisely to make the equilibrium unique.
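The difference can be checked with the same backward induction, toggling whether a bribe must strictly beat the next-round payoff. A sketch (names mine; it assumes indifferent pirates vote yes, which is exactly the degenerate case):

```python
def divide(n, coins=100, revenge=True):
    """Equilibrium split for pirates 1..n; index i holds pirate i+1's coins."""
    if n == 1:
        return [coins]
    nxt = divide(n - 1, coins, revenge)    # payoffs if the proposal fails
    premium = 1 if revenge else 0          # strictly beat, or merely match
    need = (n + 1) // 2 - 1                # votes beyond the proposer's own
    targets = sorted(range(n - 1), key=lambda i: nxt[i])[:need]
    split = [0] * n
    for i in targets:
        split[i] = nxt[i] + premium
    split[n - 1] = coins - sum(split)
    return split

print(divide(5))                 # revenge on: the captain pays 2 coins
print(divide(5, revenge=False))  # revenge off: zero-coin "bribes" suffice
```

With the revenge preference switched off, every bribe costs zero and the captain keeps all 100 coins, which is the degenerate outcome described above.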
What happens with 200 pirates and 100 coins?
Once the number of pirates gets past twice the number of coins, bribery stops working and survival takes over. With 100 coins, pirates ranked up to 200 do fine, and pirate 201 can still pass a proposal by giving away every coin. Beyond that, in the classic analysis, only pirates whose rank is 200 plus a power of 2 (201, 202, 204, 208, 216, …) can make a passing proposal, by collecting the free votes of more-senior pirates who would otherwise be doomed; the most senior pirates between those ranks get thrown overboard. This variant was popular as a follow-up question in the 2000s.
Where can I read more on this kind of problem?
The pirates problem appears in Ian Stewart’s “A Puzzle for Pirates” (the Mathematical Recreations column in Scientific American, May 1999) and in many game theory textbooks. For interview prep specifically, Mark Joshi’s Quant Job Interview Questions and Answers covers the backward induction family thoroughly.