Should I Disclose AI Use in a Take-Home? — Decision Framework (2026)

The take-home assignment in 2026 sits in a gray zone. Instructions about AI tool use are usually nonexistent or vague. The candidate is left to decide: Do I disclose that I used AI? Do I underplay it? Do I ask for clarification before starting?

The answer is not one-size-fits-all. It depends on the company’s policy, the role, the kind of take-home, and the candidate’s own ethics. This piece is a framework for navigating the decision rather than a single recommendation.

The four states the take-home can be in

  • Explicitly AI-permitted: the instructions say “AI tools are allowed” or “use whatever tools you would normally use.” Disclosure is unnecessary; assume use is welcome.
  • Explicitly AI-prohibited: the instructions say “no AI tools” or “do not use ChatGPT / Copilot / etc.” Disclosure of any use would be admission of a violation. Do not use AI tools, full stop.
  • Implicitly permitted: the instructions are silent on AI tools, but the company’s culture and the role context strongly suggest AI tools are normal. (Most modern startups, AI-native companies, mid-tier tech.)
  • Genuinely ambiguous: the instructions are silent and the company’s culture gives no clear signal. (Some traditional enterprises, some quant firms, some firms in regulated industries.)

The rest of this framework applies mainly to the implicit and ambiguous cases.

The decision framework

Step 1: Read the instructions carefully

Many candidates skim and miss an explicit policy. Look for terms like "AI", "Copilot", "ChatGPT", "LLM", and "do this on your own". If any of these appear, the take-home falls into one of the two explicit categories above and the decision is already made.

Step 2: Look at the time budget

A 2-hour take-home almost always assumes AI assistance; most candidates with AI tools can produce competent work in that window. A 6-8 hour take-home with explicit deliverables is the gray zone: some companies calibrated that scope against pre-AI baselines and may expect unaided work even though they did not say so.

Step 3: Look at the deliverables

If the take-home asks for "the code" only, AI use is hard to detect and probably accepted. If it asks for "your thought process, your design decisions, and your tradeoff reasoning", the company is grading your thinking, and AI-generated reasoning reads as generic. Disclosure or restraint matters more here.

Step 4: Look at the role context

For a software engineering role at a modern tech company, AI tool use is normal and expected. For a research role at an AI lab, the take-home may be evaluating unaided foundational reasoning. For a quant trading firm, AI use in any take-home is likely against the spirit of the assignment even if not explicitly forbidden.

Step 5: Default to clarification when in doubt

If the instructions are silent and the role context is ambiguous, ask the recruiter. “Are AI coding assistants allowed for this take-home?” is a normal question in 2026. The recruiter will either confirm the policy or check with the hiring manager. The cost of asking is low; the cost of getting it wrong is high.
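
To make the framework concrete, here is a toy sketch of the five steps as a decision tree. Everything in it is an illustrative invention, not a real tool or API, and the precedence (role context checked before the time-budget heuristic, on the assumption that a strong role signal should override it) is one reading of the steps, not the only one:

    # Toy sketch: the five-step framework as a decision tree.
    # All inputs are judgment calls the candidate makes, not detectable facts.
    def decide(instructions_mention_ai: bool, ai_permitted: bool,
               role_expects_unaided: bool, hours_budgeted: float,
               grades_reasoning: bool) -> str:
        # Step 1: an explicit policy settles the question outright.
        if instructions_mention_ai:
            if ai_permitted:
                return "use AI freely; no disclosure needed"
            return "do not use AI tools at all"
        # Step 4 (checked early): some roles signal unaided work even
        # when the instructions are silent.
        if role_expects_unaided:
            return "restraint: skip AI, or ask the recruiter first"
        # Step 2: a tight time budget implies AI assistance is assumed.
        if hours_budgeted <= 2:
            return "use AI; a brief disclosure note is cheap insurance"
        # Step 3: reasoning-graded deliverables raise the disclosure stakes.
        if grades_reasoning:
            return "disclose, or limit AI to mechanical work"
        # Step 5: nothing disambiguates, so ask before starting.
        return "email the recruiter for clarification"

The inputs are themselves judgment calls, which is the point: the framework tells you which judgment to make first, not what the answer is.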

What disclosure looks like

If you used AI and the company has not explicitly addressed the question, a brief note in your submission email is the cleanest disclosure:

“I used Cursor / Claude Code as my normal development tool while working on this. I verified all output, structured the implementation according to my own design choices, and the testing and documentation are mine. Happy to discuss the workflow during the follow-up interview.”

Three things make this disclosure effective:

  • It names the tool specifically. Vague references (“I used AI assistance”) sound evasive.
  • It clarifies the relationship. The candidate directed the work; the AI was a tool. This is not “the AI did it”; it is “I did it with AI assistance.”
  • It invites discussion. Engineers who use AI tools well are happy to talk about it; engineers who use them poorly try to hide it.

What underplaying looks like

The opposite approach — using AI tools heavily and not disclosing — has both ethical and practical risks:

  • If the follow-up interview asks you to walk through the code or modify it on the spot, a candidate who did not understand what the AI produced cannot demonstrate the depth the submission implied.
  • Some companies use AI-detection tools or simply ask "did you use AI for this?" in the follow-up. Lying outright is a serious ethical breach, and depth-probing questions usually expose it anyway.
  • The interviewer can often tell from the code style alone. AI-generated code has characteristic patterns; a candidate who cannot explain why those patterns are there is signaling that they neither wrote nor understood the code.

The clarification email template

If the instructions are ambiguous and you want to ask before starting:

“Quick clarification before I start: are AI coding assistants (Cursor, Claude Code, Copilot, etc.) allowed for this take-home? Want to make sure I follow your team’s expectations. Either way is fine.”

This is professional, signals you take the policy seriously, and removes the ambiguity. Recruiters typically respond within hours.

What if the policy changes mid-process?

Sometimes a candidate starts a take-home assuming AI is permitted, then in the follow-up the interviewer says “we wanted you to do this without AI.” The right response: be honest about what you did, explain your reasoning at the time, and offer to redo a portion if they want unaided evidence. Defending the use (“but you didn’t say not to”) is technically correct and culturally wrong; the company is reading the interaction as a signal of judgment, and rigid defensiveness scores poorly.

The case for restraint when ambiguous

For ambiguous take-homes specifically, an underrated option is to deliberately not use AI tools beyond minimal lookups. The reasoning:

  • If the company turns out to want unaided work, you have it.
  • If the company turns out to be permissive, you have done the work the harder way and have nothing to apologize for.
  • The follow-up interview is easier when you actually wrote every line yourself. You can answer any question about any decision because you made each one consciously.

This is more conservative than necessary in many cases, but the cost-benefit is asymmetric: not using AI when you could have wastes 1-2 hours; using AI when you should not have can lose the offer.

Frequently Asked Questions

Will I be detected if I do not disclose AI use?

Often yes. AI-generated code has characteristic patterns. Follow-up interviews probe whether the candidate understands what they submitted. The candidate who cannot defend the code is detected indirectly even if not by automated tooling.

Is disclosing AI use a positive signal?

At AI-permissive companies, yes. At conservative companies, the disclosure is at best neutral. Calibrate to the company.

What about open-source dependencies and Stack Overflow?

These are universally accepted. The disclosure question is specifically about generative AI tools that produce custom code on demand. Library use, documentation lookups, and Stack Overflow are normal engineering and do not require disclosure.

What if I asked the AI for help understanding the problem but did not have it write code?

Lower risk than letting the AI write code, and many would not consider it AI use in the relevant sense. But if the company asks "did you use AI for any part of this?", be honest: "I used Claude to clarify my understanding of one of the requirements. I wrote all the code myself." That is normal engineering and easy to defend.

What if my submission is partially AI-generated and I do not disclose?

Risky. If discovered, the company treats it as a trust failure rather than a tool-use failure. The recommendation is straightforward: when in doubt, disclose briefly, calmly, and professionally. The downside of disclosure is small; the downside of being caught hiding it is large.
