Take-home assignments have always been controversial — pre-AI, the complaint was time investment; post-AI, the complaint is “how do you tell who actually wrote it.” Most companies have not abandoned take-homes; they have changed them. This guide covers what shifted and how to do them well in 2026.
What companies are doing differently
- Shorter assignments (2–4 hours instead of 6–8) so the time cost is more honest
- Explicit policy on AI use: required, optional, or prohibited
- Evaluation focus on design choices and write-ups, not just code
- Follow-up live discussions where you walk through your decisions
- Open-ended scope — they want to see what you ship in N hours, not whether you finish a fixed spec
The disclosure question
Most companies in 2026 take one of three positions:
- AI required: “Use whatever tools you would use on the job, including AI.” Most AI-shipping companies (Cursor, Linear, Notion) take this position.
- AI optional, disclose: “Use AI if you want, tell us how. We will read the code with that context.” Common at mid-tier companies.
- AI prohibited: “We are evaluating your unaided skill.” Less common in 2026, mostly at companies still adjusting.
Whatever the policy, follow it honestly. A mismatch between the stated policy and what you actually did reads as dishonesty, and at most places it ends your candidacy on the spot.
The new evaluation criteria
Reviewers in 2026 read your submission with these questions:
- Did you understand the problem before writing code?
- Did you make defensible design choices? Are they explained?
- Did you handle edge cases? (AI-generated code often skips these; see the sketch after this list)
- Did you write meaningful tests?
- Did you ship something working, not “almost done”?
- If you used AI, did you verify or just paste?
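To make the edge-case and verification points concrete, here is a purely hypothetical sketch — the `average` function and its failure mode are invented for illustration — of the gap reviewers look for and the fix that signals verification:

```python
# Hypothetical example: the kind of helper AI assistants produce.
# It works for the happy path but crashes on empty input.
def average(values):
    return sum(values) / len(values)  # ZeroDivisionError on []

# The verified version handles the edge case and documents the choice,
# which is exactly the signal reviewers are reading for.
def average_checked(values):
    """Mean of values; returns 0.0 for empty input by design."""
    if not values:
        return 0.0
    return sum(values) / len(values)

assert average_checked([]) == 0.0
assert average_checked([2, 4]) == 3.0
```

A reviewer who sees the second version, plus a README note explaining the empty-input decision, can tell you read the code rather than pasted it.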
The README is now load-bearing
In 2026 the README is more important than the code. It should answer:
- What did I build?
- What design decisions did I make and why?
- What tradeoffs did I face?
- What did I cut for time?
- If I used AI: how, where, and what did I verify?
- What would I do next with another 4 hours?
A clean codebase with a thin README often loses to a less-polished codebase with a thoughtful one.
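One way to structure it, sketched here with an invented project and invented details purely as illustration, is to mirror those questions as headings:

```markdown
# Rate limiter (take-home)

## What I built
A sliding-window rate limiter behind a small HTTP API; core logic in
`limiter.py`, one-line run command in the Makefile.

## Design decisions
In-memory store instead of Redis: the spec implied single-node scope,
and it keeps setup to zero dependencies.

## Tradeoffs and cuts
No persistence, no auth; both noted under "Next 4 hours".

## AI use
Copilot drafted boilerplate and test scaffolding. I reviewed every
suggestion and rewrote the window-expiry logic by hand after its first
version mishandled boundary timestamps.

## Hours spent
About 3.5 of the suggested 4.

## Next 4 hours
Persistence, property-based tests on window boundaries, basic metrics.
```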
Live walkthroughs
The follow-up is now standard. Reviewers ask:
- Walk me through how you approached this
- Why did you pick approach X over Y?
- How did the AI tooling help or get in the way?
- Show me a piece of code you are not confident about
- Write a small change live to demonstrate you understand the codebase
Junior candidates who pasted AI output without understanding it fail this round visibly. Be ready to explain every line.
Time management
- Treat the stated time as honest. Going 50% over is normal; going 200% over signals you misjudged the scope.
- Track your hours and disclose them in the README. Reviewers respect honesty.
- Cut scope before you cut quality. A working subset beats a broken full feature.
What good submissions look like
- Clean, readable code with sensible naming
- Tests for the core happy path and a few edge cases (see the sketch after this list)
- README that explains design decisions
- Honest disclosure of AI use
- Acknowledged tradeoffs and “what I cut for time”
- Working, deployable, with a one-line run command
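As a sketch of the testing bar — `slugify` is an invented example function, kept inline so the file runs on its own with pytest:

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# One happy-path test plus a few edge cases -- the scale reviewers
# expect in a timeboxed take-home, not an exhaustive suite.
def test_happy_path():
    assert slugify("Hello World") == "hello-world"

def test_empty_string():
    assert slugify("") == ""

def test_punctuation_and_extra_spaces():
    assert slugify("  Rust -- & Go!  ") == "rust-go"
```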
What separates senior from staff
Senior submissions ship a clean implementation with thoughtful tests. Staff submissions also discuss production-grade considerations: observability, deployability, scaling, security. Even if those are not implemented, naming them shows the lens you bring.
Common failure modes
- Pasting AI output without verification — reviewers can usually tell
- Over-engineering the obvious path while leaving edge cases broken
- Missing the README entirely
- Going 4x over the time budget
- Implementing the wrong feature because you did not clarify ambiguity
- Dishonesty about AI use that becomes apparent in the live discussion
Frequently Asked Questions
Should I always disclose AI use?
If the policy says optional-but-disclose, yes. If the policy says required, you do not need to disclose every line, but mentioning the workflow is appreciated. If prohibited, do not use it.
Is it ever worth declining a take-home?
If the time ask is unreasonable (10+ hours) or the company’s reputation is poor, decline politely. Most reasonable take-homes are worth doing if you are interested.
How do I handle the case where AI gives me obviously wrong code?
Verify, fix, and document. The verification skill is the signal. A submission that catches AI errors and corrects them stands out.
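For example, an invented README note in this spirit (no company mandates this format, and the test file name is hypothetical) turns an AI mistake into evidence of verification:

```markdown
## AI use
Claude drafted the pagination helper; its first version dropped the last
page on exact multiples of the page size. Caught by the boundary test in
`test_pagination.py`; I fixed the cursor math by hand.
```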