A new interview format started showing up in 2025 and has spread through 2026. The pattern: the interviewer, the candidate, and the AI tool form a three-way collaboration on a problem. Unlike a traditional pair-programming interview where the interviewer might ask the candidate “what would you do next?” and watch them code unaided, this format treats the AI as a participant in the conversation. The interviewer evaluates not just the candidate’s code, but the candidate’s negotiation among their own intuition, the interviewer’s hints, and the AI’s suggestions.
It is the most genuinely new interview format to emerge in years, and it is hard to prepare for using traditional methods. This piece covers what the format looks like, where it shows up, and how to practice for it.
The setup
The standard configuration:
- The candidate, the interviewer, and Cursor (or another AI editor) are all present.
- The problem is non-trivial — typically requiring 30-45 minutes even with collaboration.
- The interviewer occasionally suggests directions, asks clarifying questions, or proposes alternatives. They are not silent.
- The AI tool is open and accessible. The candidate may use it freely.
- The interviewer watches both the candidate’s code and the candidate’s prompts to the AI.
The defining moment in this format is when the candidate has a choice: follow the interviewer’s suggestion, follow the AI’s suggestion, or follow their own intuition. The candidate’s reasoning at that moment is what the interview is measuring.
How it differs from traditional pair programming
Traditional pair-programming interviews put the interviewer in the senior role: they ask questions, the candidate codes, the interviewer corrects. The Cursor-era format puts the candidate in the senior role: they direct the AI, they consider the interviewer’s input as one of multiple voices, and they make integration decisions. The interviewer is a peer, not a teacher.
This is a structural shift. In traditional pair programming, the candidate’s job is to absorb feedback. In Cursor-era pair programming, the candidate’s job is to weigh competing inputs and decide. Candidates who passively defer to the interviewer score poorly; candidates who passively defer to the AI score poorly. The candidate must negotiate.
Where the format shows up
- Cursor (Anysphere) — internal interviewers use this format extensively, both because the company is built on Cursor and because it makes sense for evaluating candidates who will use Cursor full-time post-hire.
- Replit — similar dynamic.
- Some Anthropic teams — emerging, especially for Claude Code-adjacent roles.
- Some Vercel teams — particularly for AI product engineering roles.
- YC startups (2024+ batches) — increasingly common; the format mirrors how the engineering team actually works.
It is much rarer at FAANG, quant firms, and traditional enterprises. The format requires interviewer training and a level of comfort with AI tools that most established companies do not yet have.
Three sub-formats within the format
The “agree, push back, or override” format
The interviewer makes a suggestion (“what about using a hash map here?”). The AI suggests something (“you could use a binary search tree”). The candidate considers both and either: agrees, pushes back with reasoning, or overrides both with their own approach. The interviewer is grading the quality of reasoning, not which option is chosen.
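As a hedged illustration of the kind of trade-off being argued (the problem and numbers here are invented, not taken from any real interview): if the task is something like detecting duplicates, a hash-based structure gives O(1) average membership checks, while a binary search tree gives O(log n) lookups plus an ordering the task never uses. A candidate overriding both voices might verbalize exactly that and write:

```python
def first_duplicate(stream):
    """Return the first repeated item in stream, or None.

    A hash set gives O(1) average membership checks. A balanced
    BST would give O(log n) checks plus sorted order we never
    use here -- so the hash-based structure wins for this task.
    """
    seen = set()
    for item in stream:
        if item in seen:
            return item
        seen.add(item)
    return None

print(first_duplicate([3, 1, 4, 1, 5]))  # -> 1
```

The code itself is trivial; the point is the articulated reason for picking one structure over the other.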
The “find the bug” format
The interviewer puts up code that has a subtle bug. The AI is available, the interviewer is available, and the candidate must find and fix the bug. The interviewer watches whether the candidate uses the AI to scan for the bug (efficient) or uses it to generate more code without diagnosing anything (a failure mode). The interviewer may also drop hints to test whether the candidate can integrate hints with their own reasoning and the AI’s analysis.
The “open-ended build” format
The interviewer says “let’s build X over the next 45 minutes” without much further specification. The candidate scopes, decomposes, prompts the AI, evaluates output, integrates. The interviewer occasionally checks in but mostly observes. The format tests whether the candidate can sustain productive forward motion using all three voices.
What scores well
- Active disagreement when warranted. If the AI suggests a poor approach, the candidate articulates why and goes a different direction. If the interviewer suggests something the candidate disagrees with, they push back politely with reasoning.
- Integration of hints. When the interviewer drops a hint, the strong candidate integrates it with what the AI is doing rather than treating it as a separate channel.
- Verbalized weighing. “The interviewer suggested X, the AI suggested Y, but I think Z because of [reason].” The interviewer can hear the negotiation happen.
- Genuine fluency with the AI tool. Knowing the AI’s keyboard shortcuts, knowing how to prompt for specific kinds of help, knowing when the AI is reliable vs when it is not.
What scores poorly
- Always defaulting to the interviewer. A candidate who does whatever the interviewer suggests demonstrates compliance, not engineering judgment. The interviewer is testing whether you can hold a position.
- Always defaulting to the AI. Similar failure mode in the other direction. Engineers who defer fully to AI tools are not engineering.
- Inability to articulate the trade-off. If the candidate cannot explain why they chose one path over another, the interviewer cannot tell whether the choice was reasoned or random.
- Treating the AI as adversarial. Hostile prompting and suspicious verification of every AI output are wasteful and signal discomfort with the tool.
- Silence. The format requires audible negotiation. Candidates who code silently lose half the signal.
How to practice
The format is hard to practice solo because it requires a third voice (the interviewer’s). Two practice methods that work:
- Asynchronous “interviewer hints” exercises. Pick a moderately complex coding problem. Set a timer. Code it with AI assistance. Halfway through, write down a “hint” you would give yourself if you were the interviewer (“have you considered using a heap?”). Then continue, integrating the hint. The exercise builds the muscle of weighing multiple inputs.
- Real pair programming with a friend who uses AI tools. Take turns being the candidate and the interviewer. The interviewer drops occasional hints; the candidate must negotiate. Most candidates are surprised at how hard this is.
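As a sketch of what acting on the hypothetical heap hint might look like (the problem is invented for illustration): if the task is a running top-k, the hint points toward a heap, which beats sorting the whole input.

```python
import heapq

def top_k(values, k):
    """Return the k largest values, largest first.

    Sorting everything is O(n log n); keeping a size-k heap,
    which heapq.nlargest does internally, is O(n log k).
    That asymptotic gap is what the "have you considered
    using a heap?" hint is pointing at.
    """
    return heapq.nlargest(k, values)

print(top_k([5, 1, 9, 3, 7], 2))  # -> [9, 7]
```

The practice value is not the five-line function; it is pausing mid-problem, taking the hint seriously, and saying out loud why it does or does not change your approach.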
Frequently Asked Questions
Is this format common in 2026?
It is common at AI-native startups and at some AI labs. It is rare at FAANG and traditional enterprises. The format is growing but still not the default at most companies.
How is the rubric different from a regular AI-collaborative coding round?
The Cursor-era format adds the interviewer as an active participant. The candidate must negotiate three voices, not two. Verbalization of the negotiation matters more.
Can I use Cursor specifically, or any AI editor?
Generally any AI editor. Use what you are most fluent with. Many candidates choose Cursor because the format originated there and the interviewer may be familiar with its specific affordances.
What if I disagree with the interviewer’s suggestion?
Push back with reasoning. The interviewer is testing whether you have the confidence to hold a position; deferring fully is a worse signal than respectful disagreement.
How do I know if a company uses this format?
Ask the recruiter explicitly: “Will the technical rounds involve any kind of three-way collaboration with an AI tool?” If they do not know, it is unlikely to be the format. AI-native startups will usually mention it explicitly.