“How would you move Mount Fuji?” is the title brainteaser of the era, in the literal sense — William Poundstone used it as the title of his 2003 book on Microsoft-style hiring, and that book is the canonical reference for everything brainteaser-era. The question itself is older than the book, attributed to Microsoft in the late 1990s. Like the manhole cover question, it survived as a cultural reference long after it stopped functioning as an interview filter.
The question
“How would you move Mount Fuji?”
That is the entire prompt. No specifications. No deadline. No budget. The interviewer is waiting to see how the candidate metabolizes a deliberately under-specified, deliberately absurd problem. Mount Fuji is a 12,389-foot stratovolcano with a base diameter of about 50 kilometers and a mass on the order of 10^16 kilograms. The question is unanswerable in any literal engineering sense. That is the point.
The structured-response framework
Every Mount Fuji answer that worked in the brainteaser era followed roughly the same structure. The candidate started by asking clarifying questions and only then attempted any computation.
- Where are we moving it to? Across the bay? To the next prefecture? To a different continent? The answer changes the calculation by orders of magnitude.
- How fast? Are we talking about completing the move in a year, in a decade, in a century? A slow move can use much smaller equipment.
- What do we mean by “move”? Do we have to preserve the mountain’s shape? Can we crush it into gravel and pour it as concrete somewhere else? Can we leave the ground rock and only move the loose soil and snow? Can we leave the mountain and just rename a different mountain “Fuji”?
- What is the constraint we are optimizing? Cost, time, environmental impact, ceremonial preservation?
After the clarifications, the answer typically went into Fermi estimation: how many cubic meters of rock, at what density, in what kind of trucks, over how long. The exact numbers did not matter. The interviewer wanted to see whether the candidate could break a giant problem into proportional sub-problems and estimate each one’s order of magnitude.
A worked-through estimate
Let us actually do it, just to see what a competent answer looks like.
Volume. Mount Fuji approximated as a cone with base radius 25 km and height 3.8 km has volume (1/3) × π × 25,000² × 3,800 ≈ 2.5 × 10^12 m³. That is 2.5 trillion cubic meters of rock and soil.
Mass. Average rock density is about 2,500 kg/m³, so total mass ≈ 6 × 10^15 kg. Six quadrillion kilograms.
Trucks. A large mining dump truck can carry about 200 tonnes (200,000 kg) per trip. So we need ≈ 3 × 10^10 truck trips. Thirty billion trips.
Time. If we operate one truck continuously, with a one-hour round trip including loading and unloading, that is one trip per hour — roughly 10,000 trips per year per truck (a year has about 8,760 hours; round up). Thirty billion trips at 10,000 per truck-year is three million truck-years.
Fleet. If we have a fleet of one million trucks running continuously, the move takes 3 years. If we have a fleet of 10,000, the move takes 300 years.
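The whole chain is small enough to script. Here is a minimal sketch of the estimate above in Python; every input is a deliberately rough, order-of-magnitude assumption, not a measured value:

```python
import math

# Assumed inputs — all rounded Fermi-style choices, not precise data.
base_radius_m = 25_000         # cone base radius, ~25 km
height_m = 3_800               # summit height, ~3.8 km
rock_density_kg_m3 = 2_500     # average rock density
truck_capacity_kg = 200_000    # large mining dump truck, ~200 tonnes
trips_per_truck_year = 10_000  # ~1 round trip/hour, running continuously

# The multiplicative chain: volume -> mass -> trips -> truck-years.
volume_m3 = (1 / 3) * math.pi * base_radius_m**2 * height_m
mass_kg = volume_m3 * rock_density_kg_m3
truck_trips = mass_kg / truck_capacity_kg
truck_years = truck_trips / trips_per_truck_year

print(f"volume:      {volume_m3:.1e} m^3")
print(f"mass:        {mass_kg:.1e} kg")
print(f"truck trips: {truck_trips:.1e}")
print(f"truck-years: {truck_years:.1e}")

# Fleet sizing: calendar years = truck-years / fleet size.
for fleet in (1_000_000, 10_000):
    print(f"fleet of {fleet:>9,} trucks: {truck_years / fleet:,.0f} years")
```

Running it reproduces the ballpark figures in the text (≈2.5 × 10^12 m³, ≈6 × 10^15 kg, ≈3 × 10^10 trips, ≈3 × 10^6 truck-years). Changing any one input by a factor of two changes the answer by the same factor, which is exactly why the interviewer did not care about precision.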
Whether any of these numbers is exactly right is irrelevant. The point is the candidate has just shown they can decompose a problem into multiplicatively related sub-problems, attach defensible numbers to each, and combine them coherently. That same skill is exactly what is needed to size a database, plan a project, or evaluate a feature’s impact.
The Poundstone book and the era
William Poundstone’s 2003 book How Would You Move Mount Fuji? was the moment the brainteaser era went from inside-baseball industry knowledge to general-public cultural reference. Poundstone collected the canonical Microsoft-era questions, gave their canonical answers, walked through the recruitment philosophy that produced them, and added enough cultural commentary to make the book a bestseller outside of tech.
The unintended effect was to leak every famous question to every prospective candidate at once. By 2005, the population of CS students preparing for Microsoft interviews had read the book or its summaries. Asking “how would you move Mount Fuji” in 2005 was, almost by definition, asking whether the candidate had read Poundstone. That is a different test than the one the question was originally designed to administer.
The Google parallel
Google adopted a similar brainteaser tradition through the 2000s — different questions (golf balls in a school bus, piano tuners in Chicago, why are manhole covers round) but the same underlying philosophy. Then, in 2013, Laszlo Bock told the New York Times that the brainteasers were a “complete waste of time” because their internal data showed no correlation with on-the-job performance. That public retreat is the most-cited industry mea culpa about brainteaser hiring, and it accelerated the trend that Microsoft had already started — toward structured behavioral and coding interviews and away from open-ended estimation puzzles.
What was the question actually testing?
When the question worked, it tested four things:
- Tolerance for absurdity. A candidate who immediately objected “you cannot move Mount Fuji” was failing a soft test of whether they could engage with hypotheticals.
- Clarification reflex. Asking “where to” and “by when” before computing was a mark of professional maturity.
- Decomposition. Volume → mass → trucks → time is a multiplicative chain. Candidates who could not see the chain — who tried to compute “the mass of Mount Fuji” without first computing volume and density — usually got stuck.
- Estimation comfort. Picking 2,500 kg/m³ for rock density without panicking about precision is a learned skill. Engineers who spent their careers in test-driven domains often had not practiced it.
Is the Mount Fuji question still asked in 2026?
Almost never as a literal interview question. Asking it earnestly in 2026 would be cosplay — it is the most-cited dead question in the genre, and any candidate who has read about interview prep knows the framework. Where the underlying skill survives is in product manager loops, which still use Fermi estimation routinely (“how many ride-share cars operate in Tokyo on a Friday night”), and in some staff-engineer rounds, where “estimate the storage cost of running this at 100x scale” is a back-of-envelope problem with the same shape.
The cultural memory of the question is permanent. Anyone who works in tech long enough hears it referenced, usually as shorthand for the dead brainteaser tradition. The book that named the era still gets cited in onboarding documents and “how we hire” memos when companies want to explain how their interview process is different from the 1990s Microsoft tradition.
Frequently Asked Questions
Did Microsoft really ask “how would you move Mount Fuji”?
Yes, in the late 1990s and early 2000s. It is one of the most-cited Microsoft brainteasers in retrospectives of that hiring era. By the mid-2000s it had become rare; by the 2010s it was extinct in serious technical interviews.
Is there a “right answer”?
No. The signal was the candidate’s process — clarifying questions, decomposition, estimation comfort, willingness to engage with the absurdity. Any answer that demonstrated those skills was a “good” answer. Any answer that produced a confident specific number without justifying it was usually a worse answer than a vague answer with clear reasoning.
Do PM interviews still ask Fermi estimation?
Yes. Market-sizing questions (“how big is the bicycle helmet market in Germany”) are core to PM loops, and the underlying skill is the same as Mount Fuji-era estimation. The framing is different — Fermi questions in 2026 are usually grounded in real markets that the PM is plausibly going to estimate as part of the actual job — but the structure is identical.
What killed the brainteaser era?
Empirical data. Google publicly led with Laszlo Bock’s 2013 statement that brainteasers had no predictive validity. Microsoft and other companies de-emphasized them more quietly. The combination of leaked questions (every candidate had read Poundstone) and weak signal (no correlation with performance) made the format untenable. LeetCode-style coding and structured behavioral interviews replaced it.