Sakana AI is a Tokyo-based AI research lab co-founded by David Ha and Llion Jones (one of the original Transformer authors). It distinguishes itself with nature-inspired ML approaches such as evolutionary optimization and model merging, and it raised a major funding round in 2024 with backing from NEA and others. The interview emphasizes deep ML research, novel architectures, and the unique cross-cultural research environment.
Process
Recruiter screen → 60-minute coding screen (Python, with PyTorch fluency expected) → virtual onsite: two coding/ML rounds, one ML system design or research deep-dive, one paper discussion, and one behavioral. Typical cycle: 4–6 weeks.
What they actually ask
- Design a model-merging pipeline, e.g., evolutionary optimization for merge weights (see the first sketch after this list)
- Design a distributed training setup for novel architectures (see the DDP skeleton after this list)
- Explain a paper you have read recently and what surprised you
- Coding: medium-hard DSA, often ML-flavored
- Behavioral: ownership, taste, cross-cultural collaboration
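Merging plus evolutionary search comes up often enough that it is worth having a toy version in your head. Below is a minimal sketch, assuming two checkpoints with identical architectures and a single scalar interpolation weight; `merge_state_dicts`, `evolve_merge_weight`, and `fitness_fn` are hypothetical names for illustration, not Sakana's actual pipeline.

```python
import random
import torch

def merge_state_dicts(sd_a, sd_b, alpha):
    """Linear interpolation of two state dicts: alpha * A + (1 - alpha) * B.

    Assumes all tensors are floating point (a real pipeline would skip
    integer buffers such as BatchNorm's num_batches_tracked).
    """
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}

@torch.no_grad()
def evolve_merge_weight(model, sd_a, sd_b, fitness_fn,
                        generations=10, pop_size=8, sigma=0.1):
    """(1+lambda)-style evolution strategy over one scalar merge weight."""
    best_alpha, best_fit = 0.5, float("-inf")
    for _ in range(generations):
        # Mutate the current best weight and clamp to [0, 1].
        candidates = [min(max(best_alpha + random.gauss(0.0, sigma), 0.0), 1.0)
                      for _ in range(pop_size)]
        for alpha in candidates:
            model.load_state_dict(merge_state_dicts(sd_a, sd_b, alpha))
            fit = fitness_fn(model)  # e.g., validation accuracy; higher is better
            if fit > best_fit:
                best_alpha, best_fit = alpha, fit
    return best_alpha, best_fit
```

In a real discussion you would extend this to per-layer or per-tensor merge weights and a stronger optimizer such as CMA-ES, which is closer to how evolutionary model merging is usually framed.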
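For the distributed-training question, interviewers typically want the baseline nailed down before discussing what a novel architecture changes. A hedged skeleton of single-node data-parallel training, launched with `torchrun --nproc_per_node=N train.py`; the model, batch, and loss here are placeholders:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK, RANK, and WORLD_SIZE in the environment.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = torch.nn.Linear(512, 512).to(device)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(32, 512, device=device)   # placeholder batch
        loss = model(x).pow(2).mean()              # placeholder loss
        opt.zero_grad()
        loss.backward()   # DDP all-reduces gradients across ranks here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

From this baseline, the interesting conversation is what breaks for a novel architecture: uneven per-layer costs, parameters that do not shard cleanly, or the point where you reach for FSDP or pipeline parallelism instead.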
Levels and comp (2026)
- SE: ¥18M–¥28M total in Tokyo (cash + equity); $200K–$280K equivalent for non-Japan offers
- Senior SE / Research Eng: ¥28M–¥45M total; $280K–$420K US-equivalent
- Staff / Senior Researcher: ¥45M–¥70M+ total; $450K–$700K+ US-equivalent
Prep priorities
- Be fluent in Python and PyTorch; reading research papers should feel routine
- Understand evolutionary optimization, model merging, and architecture search
- Brush up on Transformer foundations and current trends (see the attention sketch after this list)
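On fundamentals, being able to write something like the following cold is a reasonable bar for the coding rounds. A from-scratch scaled dot-product attention in PyTorch:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (..., seq, d_k); mask broadcastable to (..., seq, seq)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Quick sanity check: causal self-attention over a toy batch.
q = k = v = torch.randn(2, 4, 8)
causal = torch.tril(torch.ones(4, 4))
out = scaled_dot_product_attention(q, k, v, mask=causal)
assert out.shape == (2, 4, 8)
```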
Frequently Asked Questions
Is Sakana remote-friendly?
The hub is Tokyo. Some senior+ roles can be remote within Japan or APAC, and US-based hires are possible at higher levels.
How does Sakana compare to other AI labs?
Sakana stands out for research originality and its bilingual Japanese/English culture. It is smaller than the US frontier labs and more research-focused than product-focused. Comp is below the top of band at US frontier labs but very strong for Japan-based ML roles.
What is the engineering culture?
A blend of research and engineering: calm, creativity-prized, with a strong taste for unconventional approaches.