Tech interviewing has gone through three distinct eras over the last 30 years, each defined by the format that dominated and the kind of candidate that format selected for. The arc starts with Microsoft brainteasers in the 1990s, runs through the Google whiteboard era of the 2000s and the LeetCode standardization of the 2010s, and arrives in 2026 at a fragmented landscape where coding remains central but the format choices vary widely — take-homes at some companies, live debugging at others, AI-assisted interviews appearing for the first time. Understanding the arc is useful both for preparation in 2026 and for forecasting what might come next.
Era 1: brainteasers (1990s–early 2000s)
Microsoft is the prototype. In the 1990s, the company adopted a hiring philosophy that placed enormous weight on raw cognitive ability. The premise was that smart candidates could learn any specific technology, but candidates who could not reason about novel problems would always be limited by what they already knew. The brainteaser was the operational test for “can this person attack a problem they have not seen before”.
The canonical questions of this era — manhole covers, Mount Fuji, golf balls in a school bus, piano tuners in Chicago, the burning ropes, the bridge crossing — were collected in William Poundstone’s 2003 book How Would You Move Mount Fuji?. The book’s success was the moment the era leaked into general awareness. Within a few years of the book, every CS student preparing for Microsoft or Google had heard of every famous brainteaser, and the questions stopped functioning as filters for raw cognitive ability — they became filters for “have you read the book”.
The era ended for two reasons. First, leakage made the questions useless. Second, when companies finally measured whether brainteaser performance predicted job performance, the answer was no. Google's Laszlo Bock made this public in a 2013 New York Times interview, calling brainteasers "a complete waste of time," but the underlying data had been accumulating for years.
Era 2: whiteboard coding (2000s–mid 2010s)
The whiteboard coding era is the one most senior engineers in 2026 grew up in. The canonical question type was “write code on a whiteboard, in a real programming language, for a specific algorithmic problem”. The questions came from the academic algorithms canon — sorting, searching, graph traversal, dynamic programming, classic data structures — adapted for the time pressure of an interview slot.
Google was the prototype. The Google interview format of the late 2000s — five 45-minute coding rounds, each with a different interviewer, each starting with a small algorithmic problem and progressing to a harder variation — became the standard that Microsoft, Amazon, Meta, and most large tech companies adopted within a few years. The shared format created the standardization that Cracking the Coding Interview codified in 2008 and that subsequent prep books refined.
The whiteboard era ended for several reasons. First, Max Howell's June 2015 tweet about his Google rejection over invert-a-binary-tree crystallized a long-running candidate complaint about the format's arbitrariness. Second, the artificial constraint of writing perfect syntax under live pressure was widely seen as testing performance under pressure rather than engineering ability. Third, collaborative editing tools (CoderPad, CodeSignal, Google Docs) became good enough that the whiteboard's purpose — preventing candidates from just typing into an IDE — could be served better with live screen-share and a simple text editor.
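For readers who never met it, the invert-a-binary-tree question is genuinely small, which is part of why the rejection story resonated: the expected answer is a recursive swap of each node's children. A minimal sketch in Python (the `Node` class here is illustrative, not any particular platform's definition):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def invert(root):
    """Mirror the tree: swap left and right children at every node."""
    if root is None:
        return None
    root.left, root.right = invert(root.right), invert(root.left)
    return root
```

The whole answer fits on a whiteboard with room to spare; the interview difficulty was never the code, it was producing it cold under observation.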
Era 3: LeetCode standardization (mid 2010s–early 2020s)
LeetCode — the platform — launched in 2015 and within a few years became the dominant prep route for tech interviews. The platform’s success was self-reinforcing: more candidates prepped on LeetCode, so interviewers started drawing problems from LeetCode, so prep on LeetCode became more useful, so more candidates prepped there. By 2020, the typical FAANG interview problem was either a LeetCode problem or a near-cousin.
This era saw the rise of community-curated lists — Blind 75, NeetCode 150, Top Interview 150 — which distilled the platform's thousands of problems into manageable subsets. Candidates who completed Blind 75 were prepared for most phone screens. Candidates who completed NeetCode 150 were prepared for most onsites. The sheer regularity of the format meant interviews became predictable in a way they had not been since the brainteaser era — and like the brainteaser era, this predictability eventually produced its own pushback.
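To make the difficulty bar concrete: Two Sum, typically the opening problem on these curated lists, is representative of a phone-screen question. A sketch of the single-pass hash-map answer that interviewers generally expect over the brute-force quadratic scan:

```python
def two_sum(nums, target):
    """Return indices (i, j) of two numbers summing to target, or None.

    One pass with a hash map: O(n) time, O(n) space.
    """
    seen = {}                      # value -> index of first occurrence
    for i, x in enumerate(nums):
        if target - x in seen:     # the needed complement was seen earlier
            return seen[target - x], i
        seen[x] = i
    return None
```

The pattern (trade a hash map for a nested loop) recurs across dozens of list problems, which is exactly why grinding the curated subsets transferred so well to real interview loops.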
The LeetCode era is still the dominant tech interview format in 2026. Most coding rounds at FAANG and tier-2 tech firms still consist of one or two LeetCode-style problems, and most candidates still prepare by grinding LeetCode. What has changed is the perception that LeetCode is sufficient: a generation of candidates is now visibly questioning whether the format actually predicts engineering ability, and a generation of senior engineers is openly skeptical that LeetCode performance correlates with on-the-job impact.
Era 4: fragmentation (2020s)
Beginning around 2020, interview formats began fragmenting. Different companies adopted different non-LeetCode formats, often in direct response to specific perceived weaknesses of LeetCode. Several of the alternatives:
- Take-home assignments. Multi-hour or multi-day projects that the candidate completes on their own time and submits. Stripe was an early adopter; many startups have followed. The take-home tests a different bundle of skills (planning, code organization, polish) but is unpopular with experienced candidates because of the time burden.
- Live debugging rounds. The candidate is given a real codebase with a planted bug or feature request and asked to fix it during the interview. Tests practical engineering skill in a way LeetCode does not. Adopted by some teams at Atlassian, GitHub, GitLab, and others.
- Pair programming sessions. A senior engineer and the candidate work together on a real problem, with the senior engineer evaluating not just correctness but collaboration and communication.
- Trial days or paid trial weeks. The candidate works on the team for a day or a week, getting paid, while both sides evaluate fit. Used by some YC startups and a handful of larger companies.
- System design first. Some senior+ interviews now lead with system design rather than coding, on the theory that for senior roles the architectural thinking is more job-relevant than the algorithmic puzzles.
None of these has displaced LeetCode-style coding at FAANG; the LeetCode round is still the dominant format. But the alternatives have made tech interviewing as a category more diverse than it has been since the 1990s, and a candidate preparing in 2026 has to be ready for whichever format the specific company uses.
Era 5: AI-assisted hiring (emerging, 2024–present)
The most recent chapter is just beginning. Generative AI tools — particularly the modern code assistants — have changed what candidates can do alone, what interviewers can detect, and what the format of “writing code under interview pressure” should test for. Several emerging trends:
- AI-permitted interviews. Some companies (Anthropic, some teams at Google, some AI labs) explicitly allow candidates to use AI tools during coding interviews, on the theory that the actual job involves using these tools and the interview should reflect that. The signal shifts from “can the candidate produce the code” to “can the candidate productively direct the AI to produce correct code, evaluate its output, and integrate it cleanly”.
- AI-detected interviews. Other companies have invested in detection tools that flag suspected AI assistance and treat it as a violation. The detection arms race is unstable and likely to be temporary; companies will eventually settle on either “permit and test the directing skill” or “redesign to make AI assistance unhelpful” rather than “detect and forbid”.
- Format pressure on take-homes. Take-home assignments have become harder to use because candidates can complete them with AI assistance. Some teams have abandoned take-homes entirely; others have moved to time-limited live evaluations of submitted take-homes.
- Behavioral and system design durability. The interview formats that AI cannot directly automate — behavioral storytelling, structured system design, communication-heavy collaborative rounds — are gaining relative weight in many loops.
What this era will stabilize into is uncertain in 2026. The most likely outcome is that coding interviews evolve to test “directing AI” as a skill, that system design and behavioral rounds gain weight, and that the LeetCode-grind preparation route loses some of its dominance. The full shape of the next era is not yet clear.
What the arc tells us about the future
Three patterns repeat across the eras. First, each dominant format worked in a specific moment, became standard, leaked through prep books and online communities, and lost predictive power as preparation caught up. Second, each format was eventually replaced by one that addressed the leakage, either by adding artificial constraints (whiteboards) or by changing the format substantially (take-homes, live debugging). Third, each transition was driven less by deliberate industry coordination than by individual companies experimenting and the successful experiments spreading by word of mouth.
The likely fate of LeetCode-style coding is similar: it will not disappear, but its relative weight will decrease as prep saturation makes it less informative and as new formats — AI-collaborative, system-design-first, take-home — develop their own communities of best practice. By 2030, the typical tech interview will likely look meaningfully different from the typical 2020 tech interview, in the same way the 2020 interview looks different from the 2010 interview.
What to prepare in 2026
Three things, regardless of which format the specific company uses:
- Algorithmic fluency. LeetCode-style coding is still the dominant format and is likely to remain so through the late 2020s. Blind 75 plus targeted topic depth is the bar.
- System design depth. Senior+ interviews increasingly lead with system design. Alex Xu’s books are the canonical reference; supplement with Designing Data-Intensive Applications.
- Behavioral framing. The behavioral round is the one format that has been resilient across all five eras and will likely remain so. The STAR framework, the canonical question categories, and the company-specific frameworks (Amazon LPs, Google Googleyness) are stable.
What to monitor: how AI-assisted interviewing develops at major companies, whether take-homes survive or fade, and whether any major company makes a public retreat from LeetCode the way Google did from brainteasers in 2013. Those are the leading indicators of the next era.
Frequently Asked Questions
Are brainteasers really gone for good?
In tech, yes — they have been essentially extinct for over a decade. Wall Street and quant interviews still use probability puzzles, but those are job-relevant in a way the manhole cover question never was for software engineers.
Will LeetCode-style coding go away?
Probably not entirely, but its dominance is likely to fade as new formats develop and as AI assistance changes what coding-under-pressure tests. The next decade will likely see LeetCode reduced from "the interview" to "one component of the interview".
What is the most durable interview format?
The behavioral round. The questions and frameworks have been stable since the early 2000s and are likely to remain stable. If you are preparing for tech interviews and only have time to nail one format perfectly, the behavioral round is the highest-leverage choice.
How should I prepare for AI-assisted interviews?
Practice using AI tools collaboratively on real engineering tasks. The skill is not “can I write the code” but “can I direct the tool to write correct code, evaluate the output critically, and integrate it cleanly”. This is the actual job in 2026 for most engineers; preparing for it is preparation for both the interview and the work.
Is the arc going to keep accelerating?
Probably. The pace of format change has accelerated each decade since the 1990s, partly because the industry is larger and more diverse, and partly because online communities make new formats spread faster. The next major format shift is likely within five to seven years.