“AI is allowed during this interview” is a sentence far more candidates hear in 2026 than anyone imagined in 2022. The format varies: some companies say “use AI freely,” others “use AI as you would on the job, and narrate your decisions.” The interviewer is watching for different things than in a no-AI interview. This guide is for the candidate trying to perform well in this format.
The companies running this format
- AI-tooling companies (Cursor, Claude, Windsurf) — naturally
- Engineering-forward SaaS (Linear, Vercel, Notion) — increasingly
- Some BigTech roles, especially those targeting AI fluency
- YC-stage startups — often informal but allowed
The shape of the interview
- 60–90 minutes, screen-shared
- A real-feeling problem — build something or solve a non-trivial bug
- Open editor with AI assistant of choice (Cursor, Copilot, etc.)
- Interviewer observes, asks questions throughout
- Less LeetCode, more “implement this small feature”
What the interviewer is grading
- Can you decompose the problem clearly before reaching for AI?
- Do you know what you want before you prompt?
- Can you read the AI’s output critically?
- Do you verify behavior, not just compile?
- Are you efficient at iterating prompts?
- Do you handle the AI being wrong without panicking?
What the interviewer is NOT grading
- Whether you remember syntax (you can ask the AI)
- Whether you can write code from scratch faster than AI can
- Whether you have memorized algorithm details
The bar shifts toward judgment, not recall.
The first 5 minutes
- Restate the problem in your words
- Ask clarifying questions about scope, edge cases, performance
- Sketch the approach — what types, what functions, what data flow
- Decide on a project structure
- Tell the interviewer your plan before opening the editor
This step is where many candidates lose the interview before writing any code. Skipping it makes everything that follows look reactive.
The middle: shipping
- Implement the plan; use AI for boilerplate, scaffolding, tedious parts
- Read each AI suggestion before accepting
- If AI produces wrong output, do not just re-prompt — diagnose
- Run the code; verify behavior
- Add tests proactively (usually expected; a seniority signal)
- Talk through what you are doing
Common pitfalls
- Pasting an AI answer without reading: the most-watched failure mode
- Re-prompting until something works without understanding why: signals lack of grounding
- Long silences: talk through your thinking; the interviewer cannot follow your screen as fast as you can
- Stuck on AI being wrong: diagnose; do not blame the tool
- Skipping verification: “compiled OK, ship it” is a junior signal
The verification habits to demonstrate
- Run the code with sample input
- Run a quick sanity test — is the output what you expected?
- Add unit tests, even small ones
- Discuss edge cases the AI missed
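In a Python-flavored interview, that verification loop might look like the sketch below. The function and its inputs are hypothetical stand-ins for whatever the assistant just generated; the point is the habit of running it, sanity-checking the output, and probing edge cases out loud.

```python
# Sketch of the verification loop on a hypothetical AI-generated function.
# `merge_intervals` stands in for whatever the assistant just wrote.

def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals (AI-suggested implementation)."""
    if not intervals:  # empty-input edge case worth calling out explicitly
        return []
    intervals = sorted(intervals)
    merged = [list(intervals[0])]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# 1. Run with sample input and check the output against expectations
print(merge_intervals([[1, 3], [2, 6], [8, 10]]))  # expect [[1, 6], [8, 10]]

# 2. Quick sanity tests, including edge cases the AI may have missed
assert merge_intervals([]) == []
assert merge_intervals([[5, 7]]) == [[5, 7]]
assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]  # touching intervals merge
```

Plain `assert` lines like these are enough in a timed setting; narrating why you chose each edge case is worth as much as the tests themselves.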
The narration
Talk while you work:
- “I am asking AI for a function that does X; I want to verify the boundary handling”
- “That suggestion is close but does not handle the empty case; let me adjust”
- “I am going to write the test first to make sure I understand the requirement”
- “I am skipping the AI here because the logic is small enough I want full control”
This narration is the new “show your work.”
Specific tactics that help
- Have your AI tool warmed up — do not waste interview time on setup
- Use the IDE’s diff view to compare AI suggestions to your intended change
- Maintain a small mental checklist: clarify, decompose, implement, verify
- If you finish early, ask “want me to extend this to handle X?”
What gets you the offer
- Calibrated trust in AI output
- Verification habits
- Articulate decomposition before coding
- Pragmatic prompt iteration
- Clear narration of decisions
- Honest “this is not my strongest area” when relevant
What gets you rejected
- Pasting and shipping without checking
- Long silences
- Confidently wrong narration (“the AI got it right” when it did not)
- Treating AI like an oracle rather than a peer
- No tests, no verification
The “AI is forbidden” alternative
Some companies still forbid AI in live coding. If you are interviewing at one:
- Sharpen your raw coding skills
- Practice without AI for 2 weeks before the interview
- Communicate well about decomposition
- The same clarify → plan → implement → verify pattern still applies
What separates senior from staff in AI-allowed interviews
- Junior with AI: ships features faster than they understand them
- Senior with AI: uses AI to compress tedious work but holds the design and verification themselves
- Staff with AI: directs AI through complex multi-step tasks while monitoring quality, narrating tradeoffs, and demonstrating taste at every decision
The senior signal is judgment in human-AI collaboration.
Frequently Asked Questions
Should I look up things via AI during the interview?
Yes — companies running this format expect it. Asking AI “what is the standard library for X in Python” is fine. Asking “solve this whole problem” without engaging is not.
What if the AI gives me a perfect answer immediately?
Verify. Run it. Add a test. Discuss the edge cases. The fast path raises the bar for verification; it does not lower it.
How do I prepare?
Practice on real problems with your AI assistant the way you will in the interview. The narration habit and the verification habit are the things to build.