Rust is growing rapidly in systems programming, WebAssembly, and high-performance services. Companies like Cloudflare, Discord, AWS, and Dropbox use Rust in production. This guide covers the Rust interview questions you will encounter at these companies.
Ownership and Borrowing
// Rust ownership rules:
// 1. Each value has exactly one owner
// 2. When the owner goes out of scope, the value is dropped
// 3. You can have either ONE mutable reference OR multiple immutable references, never both
fn ownership_basics() {
    let s1 = String::from("hello");
    let s2 = s1; // Move: s1 is no longer valid
    // println!("{}", s1); // Compile error: value moved
    println!("{}", s2);
    let s3 = String::from("world");
    let s4 = s3.clone(); // Deep copy: both valid
    println!("{} {}", s3, s4);
    // Stack types (Copy trait): copy instead of move
    let x: i32 = 5;
    let y = x; // Copy
    println!("{} {}", x, y); // Both valid
}
fn borrowing_rules() {
    let mut data = vec![1, 2, 3];
    // Immutable borrows: any number may coexist
    let r1 = &data;
    let r2 = &data;
    println!("{} {}", r1.len(), r2.len());
    // Last use of r1/r2 is above, so their borrows end here (non-lexical lifetimes)
    // Mutable borrow: only one at a time, with no immutable borrows alongside it
    let r3 = &mut data;
    r3.push(4);
    // r3's borrow ends after its last use, so `data` is usable directly again
    data.push(5);
}
// The borrow checker prevents dangling references at compile time
fn no_dangling_refs() {
    let _reference: &i32;
    {
        let value = 42;
        // _reference = &value; // Compile error: `value` does not live long enough
        println!("{}", value);
    } // value is dropped here; the reference would dangle if that line compiled
}
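A common follow-up question is how to fix the dangling reference. The usual answer: return an owned value and transfer ownership to the caller. A minimal sketch:

```rust
// Instead of returning a reference to a local, move ownership out.
fn make_value() -> String {
    let value = String::from("owned");
    value // ownership moves to the caller; nothing dangles
}

fn main() {
    let v = make_value();
    assert_eq!(v, "owned");
    println!("{}", v);
}
```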
Lifetimes
// Lifetimes annotate how long references are valid.
// The compiler infers most lifetimes; explicit annotations needed when it cannot.
// Returns the longer of two string slices
// 'a means: the returned reference is valid only while BOTH inputs are,
// i.e. for the shorter of the two input lifetimes
fn longest<'a>(s1: &'a str, s2: &'a str) -> &'a str {
    if s1.len() >= s2.len() { s1 } else { s2 }
}
// A struct holding a reference requires a lifetime parameter
struct WordParser<'a> {
    text: &'a str, // The parser cannot outlive the text it references
}
impl<'a> WordParser<'a> {
    fn new(text: &'a str) -> Self {
        WordParser { text }
    }
    fn first_word(&self) -> &str { // Lifetime elided: the return borrows from &self
        let bytes = self.text.as_bytes();
        for (i, &byte) in bytes.iter().enumerate() {
            if byte == b' ' { return &self.text[..i]; }
        }
        self.text
    }
}
// 'static lifetime: valid for the entire program duration
fn static_lifetime() {
    let s: &'static str = "I live forever"; // string literals are always 'static
    println!("{}", s);
}
// Common lifetime pattern in APIs: outputs borrow from inputs
fn split_at<'a>(s: &'a str, mid: usize) -> (&'a str, &'a str) {
    (&s[..mid], &s[mid..])
}
Traits and Generics
use std::fmt;
use std::ops::Add;
// Trait: defines shared behavior (like an interface + default methods)
trait Summary {
    fn summarize_author(&self) -> String;
    fn summarize(&self) -> String { // Default implementation
        format!("(Read more from {}...)", self.summarize_author())
    }
}
struct Article { author: String, headline: String, content: String }
impl Summary for Article {
    fn summarize_author(&self) -> String { self.author.clone() }
    fn summarize(&self) -> String {
        // content.get(..100) avoids the panic that content[..100] would
        // cause on content shorter than 100 bytes
        let preview = self.content.get(..100).unwrap_or(&self.content);
        format!("{}, by {} — {}", self.headline, self.author, preview)
    }
}
// Trait bounds: generic functions that work on any type implementing a trait
fn notify(item: &impl Summary) { // impl Trait syntax, shorthand for <T: Summary>
    println!("Breaking news: {}", item.summarize());
}
fn notify_generic<T: Summary + fmt::Display>(item: &T) { // Multiple bounds with `+`
    println!("{}: {}", item, item.summarize());
}
// Generic struct, with trait bounds on the impl
struct Pair<T> { first: T, second: T }
impl<T: PartialOrd + fmt::Display> Pair<T> {
    fn cmp_display(&self) {
        if self.first >= self.second {
            println!("Largest: {}", self.first);
        } else {
            println!("Largest: {}", self.second);
        }
    }
}
// Common standard library traits:
// Clone, Copy, Debug, Display, PartialEq, Eq, Hash, PartialOrd, Ord
// Iterator, IntoIterator, From/Into, AsRef/AsMut, Deref, Send, Sync
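Most of these traits are either derivable or implementable in a few lines. A small sketch using made-up `Celsius`/`Fahrenheit` types to show `derive` and the From/Into pair:

```rust
// Derivable traits: the compiler generates the impls for you.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Celsius(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Fahrenheit(f64);

// Implementing From gives you the matching Into for free.
impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    let c = Celsius(100.0);
    let f: Fahrenheit = c.into(); // Into provided by the From impl
    assert_eq!(f, Fahrenheit(212.0));
    println!("{:?} = {:?}", c, f);
}
```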
Error Handling — Result and the ? Operator
use std::fs::File;
use std::io::{self, Read};
use std::num::ParseIntError;
// Rust has no exceptions. Errors are values: Result<T, E> or Option<T>
#[derive(Debug)]
enum AppError {
    Io(io::Error),
    Parse(ParseIntError),
    Custom(String),
}
impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            AppError::Io(e) => write!(f, "I/O error: {}", e),
            AppError::Parse(e) => write!(f, "Parse error: {}", e),
            AppError::Custom(s) => write!(f, "Error: {}", s),
        }
    }
}
impl From<io::Error> for AppError { fn from(e: io::Error) -> Self { AppError::Io(e) } }
impl From<ParseIntError> for AppError { fn from(e: ParseIntError) -> Self { AppError::Parse(e) } }
// ? operator: propagates the error if Err, unwraps if Ok
fn read_number_from_file(path: &str) -> Result<i32, AppError> {
    let mut file = File::open(path)?; // io::Error -> AppError::Io via From
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    let num: i32 = contents.trim().parse()?; // ParseIntError -> AppError::Parse
    Ok(num * 2)
}
// Combinators for Option and Result
fn process(s: &str) -> Option<usize> {
    s.find("needle")
        .map(|pos| pos + "needle".len()) // index just past the match
        .filter(|&pos| pos < s.len())    // only if text follows the match
}
// anyhow crate: ergonomic error handling for applications
// thiserror crate: derive macros for custom error types
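The same combinator style works on Result. A stdlib-only sketch (no anyhow/thiserror needed), parsing a hypothetical "key=value" config line:

```rust
// Parse a "key=value" pair, mapping each failure to a plain String error.
fn parse_kv(s: &str) -> Result<(String, i32), String> {
    let (key, raw) = s
        .split_once('=')
        .ok_or_else(|| format!("missing '=' in {:?}", s))?; // Option -> Result
    let value = raw
        .trim()
        .parse::<i32>()
        .map_err(|e| format!("bad value {:?}: {}", raw, e))?; // convert the error type
    Ok((key.to_string(), value))
}

fn main() {
    assert_eq!(parse_kv("retries=3"), Ok(("retries".to_string(), 3)));
    assert!(parse_kv("retries").is_err());
    assert!(parse_kv("retries=many").is_err());
}
```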
Async/Await and Tokio
use tokio::time::{sleep, Duration};
use tokio::task;
// async fn returns a Future — lazy, does nothing until awaited
async fn fetch_data(url: &str) -> Result<String, reqwest::Error> {
    let response = reqwest::get(url).await?;
    let text = response.text().await?;
    Ok(text)
}
// #[tokio::main] starts the Tokio runtime (multi-threaded by default)
#[tokio::main]
async fn main() {
    // join!: run both futures concurrently on the current task
    let (result1, result2) = tokio::join!(
        fetch_data("https://api.example.com/a"),
        fetch_data("https://api.example.com/b"),
    );
    println!("{:?} {:?}", result1, result2);
    // Spawn a background task onto the runtime's thread pool
    let handle = task::spawn(async {
        sleep(Duration::from_secs(1)).await;
        "background work done"
    });
    let result = handle.await.unwrap();
    println!("{}", result);
    // select!: wait for whichever future finishes first
    tokio::select! {
        _ = sleep(Duration::from_millis(100)) => println!("Timeout"),
        data = fetch_data("https://api.example.com") => {
            println!("Got data: {:?}", data);
        }
    }
}
Memory Safety Without GC
| Feature | How Rust achieves it | Vs C/C++ |
|---|---|---|
| No use-after-free | Ownership system: value dropped when owner goes out of scope | C++: manual delete; easy to miss |
| No dangling pointers | Borrow checker ensures references cannot outlive referents | C: returning pointer to local = UB |
| No data races | Only one &mut reference allowed at a time; Send/Sync trait enforcement | C++: no compile-time protection |
| No buffer overflows | Bounds checking on slice access (debug + release) | C: out-of-bounds is UB |
| No null pointer deref | No null: use Option<T>; must handle None explicitly | C: million-dollar mistake |
| Leak resistance | Drop trait runs deterministically when the owner goes out of scope (Rc cycles and mem::forget can still leak, safely) | GC: non-deterministic frees; C/C++: manual free, easy to leak |
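Two of the table's rows can be demonstrated directly in a few lines of safe stdlib code, showing how absence and out-of-bounds access become explicit values rather than undefined behavior:

```rust
fn main() {
    let data = vec![10, 20, 30];

    // No null pointer deref: absence is an explicit Option you must handle.
    let maybe: Option<&i32> = data.get(10); // out of bounds -> None, not UB
    assert_eq!(maybe, None);

    // No buffer overflow: indexing with data[10] would panic (bounds-checked),
    // never silently read adjacent memory as it could in C.
    let last = data.get(2).copied().unwrap_or(0);
    assert_eq!(last, 30);
}
```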
Frequently Asked Questions
What is Rust's ownership system and how does it prevent memory bugs without a garbage collector?
Rust's ownership system is a set of compile-time rules that guarantee memory safety without a garbage collector or runtime overhead. Three rules: (1) Each value has exactly one owner. (2) When the owner goes out of scope, the value is dropped and memory is freed automatically. (3) Ownership can be transferred (moved) but not duplicated: once you move a value to a new owner, the original variable is invalidated. These rules eliminate use-after-free (the compiler knows the owner went out of scope), double-free (only one owner, only one drop), and most memory leaks (the owner always drops; reference-counted cycles are the main exception). For sharing without moving, Rust uses references (borrowing). The borrow checker enforces: you can have either any number of immutable references OR exactly one mutable reference at a time, never both. This eliminates data races at compile time, because a data race requires two concurrent accesses where at least one is a write, and the borrow rules make that state unrepresentable. Performance: there is no garbage collector, no reference counting overhead (unless you explicitly use Rc/Arc), and no mandatory runtime checks beyond bounds checks on slice indexing. Memory is freed deterministically at scope exit (RAII). This is why Rust is used for systems programming where you need C-level performance and control: OS kernels, device drivers, game engines, WebAssembly, and network proxies.
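The deterministic, scope-based destruction described above can be observed directly with a `Drop` impl. A minimal sketch (the `Guard` type and counter are illustrative, not from any library):

```rust
use std::cell::Cell;
use std::rc::Rc;

// A guard that records when it is dropped.
struct Guard {
    drops: Rc<Cell<u32>>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        self.drops.set(self.drops.get() + 1);
    }
}

fn main() {
    let drops = Rc::new(Cell::new(0));
    {
        let _g = Guard { drops: Rc::clone(&drops) };
        assert_eq!(drops.get(), 0); // still owned, not yet dropped
    } // scope ends: the owner goes away and Drop runs immediately (RAII)
    assert_eq!(drops.get(), 1); // freed deterministically, no GC pause
}
```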
What is the difference between Box, Rc, and Arc in Rust?
Box<T>, Rc<T>, and Arc<T> are Rust's three smart pointer types for heap allocation and shared ownership. Box<T>: allocates T on the heap, single ownership. When the Box goes out of scope, the heap allocation is freed. Use when: value is too large for the stack; recursive types (linked list, tree where the size is not known at compile time); when you need heap allocation for a trait object (Box<dyn Trait>). Zero runtime overhead beyond the allocation. Rc<T> (Reference Counted): allows multiple owners in single-threaded code. Maintains a reference count; drops when count reaches zero. Not Send — cannot be sent across threads (the counter is not atomic). Use for shared ownership within a single thread (e.g., a graph where multiple nodes hold references to the same node). Rc::clone() increments the counter (cheap); does not clone the data. RefCell<T> pairs with Rc for interior mutability — runtime-checked borrow rules instead of compile-time. Arc<T> (Atomic Reference Counted): like Rc but with an atomic counter, making it thread-safe (implements Send + Sync). Higher cost than Rc due to atomic operations on the counter. Use for shared ownership across threads. Pairs with Mutex<T> or RwLock<T> for mutability. Common pattern: Arc<Mutex<T>> for shared mutable state across threads. Summary: Box = single owner, heap. Rc = shared owner, single thread. Arc = shared owner, multi-thread.
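The Arc<Mutex<T>> pattern mentioned above, as a runnable stdlib-only sketch:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared mutable counter: Arc for shared ownership across threads,
    // Mutex for exclusive access to the data inside it.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..8)
        .map(|_| {
            let counter = Arc::clone(&counter); // atomic refcount bump, data not copied
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    // All increments are serialized by the Mutex: no lost updates.
    assert_eq!(*counter.lock().unwrap(), 8000);
}
```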
How does Rust's async/await differ from other languages and why is it zero-cost?
Rust's async/await compiles async functions into state machines rather than allocating a heap object per suspension point. An async fn is syntactic sugar for a function that returns an impl Future. The compiler transforms the function body into a state machine where each suspension point (each .await) is a state. The whole state machine is a single struct whose size is known at compile time: spawning a task costs one allocation for the entire call tree, not one allocation per await. In contrast: Python and JavaScript async/await allocate a coroutine or promise object on the heap for every async function call; Go goroutines each get a heap-allocated, growable stack (starting at a few KB); Python's asyncio and Node.js event loops add interpreter-level runtime overhead. Rust futures are lazy: a Future does nothing until polled. The async runtime (Tokio, async-std) calls poll() on the root future, which drives the state machine forward until it either completes or returns Pending (waiting for I/O). No runtime is built into Rust itself; you choose the executor. Tokio provides a work-stealing thread pool, an I/O reactor built on epoll/kqueue, and a timer wheel. Zero-cost means: if your code has no async, you pay nothing for the async infrastructure; if you do use async, you pay only for the state machine logic, not for an interpreter or GC. The main limitation: async functions return opaque Future types, which can make signatures complex; async fn in traits stabilized in Rust 1.75, but dynamic dispatch over async traits still typically requires boxing (Box<dyn Future>).
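What "compiles to a state machine, polled until Ready" means can be shown with a hand-written Future, polled manually with a no-op waker. This is a teaching sketch using only the standard library (the `Countdown` type is invented for illustration; real async code lets the compiler generate the state machine and a runtime do the polling):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A hand-written future: advances one state per poll, mimicking the
// state machine the compiler generates for an async fn.
struct Countdown(u32);

impl Future for Countdown {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;  // advance the state machine by one step
            Poll::Pending // "not ready yet; poll me again"
        }
    }
}

// A waker that does nothing, just so we can call poll() by hand.
fn noop_waker() -> Waker {
    fn noop(_: *const ()) {}
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Countdown(2);
    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(msg) = Pin::new(&mut fut).poll(&mut cx) {
            assert_eq!(msg, "done");
            break;
        }
    }
    assert_eq!(polls, 3); // lazy: nothing happened until we polled
}
```

A real executor does the same loop, but parks the task on Pending and only re-polls when the waker fires.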