Low Level Design: Write-Through Cache

What Is Write-Through Cache?

Write-through caching is a strategy where every write to the cache is synchronously propagated to the backing store before the write is acknowledged to the caller. The cache and the database are always in sync: if the cache contains a key, it is guaranteed to reflect the latest committed value.

Use write-through when reads vastly outnumber writes, data correctness is critical (financial balances, inventory), and the latency cost of a synchronous double-write is acceptable. It is a poor fit when the write path is latency-sensitive or when most cached items are rarely read after being written (wasted write amplification).

Architecture Overview

  • Application layer: All writes go to the cache client, never directly to the database.
  • Cache (e.g., Redis): Holds the written value. Either a write-through-aware cache layer forwards the write to the database synchronously, or (more commonly) the application performs both writes in sequence; success is confirmed only after both stores are updated.
  • Database: The ultimate source of truth; receives every write before the caller is unblocked.

In practice, most applications implement write-through in the application layer rather than relying on a cache proxy: write to DB, write to cache, return success. A distributed lock or optimistic version check prevents a race between the two steps.
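The application-layer pattern above can be sketched in a few lines. This is a minimal, self-contained illustration, not a production implementation: the `accounts` table and its fields are hypothetical, an in-memory SQLite database stands in for the real database, and a plain dict stands in for Redis.

```python
import sqlite3

# In-memory stand-ins: SQLite for the database, a dict for the cache tier.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)"
)
db.execute("INSERT INTO accounts VALUES (1, 100, 0)")
db.commit()

cache = {}

def write_through(account_id, new_balance):
    # Step 1: commit to the source of truth, bumping the row version.
    with db:  # opens a transaction; commits on success, rolls back on error
        db.execute(
            "UPDATE accounts SET balance = ?, version = version + 1 WHERE id = ?",
            (new_balance, account_id),
        )
        (version,) = db.execute(
            "SELECT version FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
    # Step 2: only after the commit, update the cache so a hit is always fresh.
    cache[f"account:{account_id}"] = {"v": version, "data": {"balance": new_balance}}
    return version
```

The ordering matters: committing to the database first means a crash between the two steps leaves the cache stale (recoverable) rather than ahead of the database (a lie).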

Data Model

No special schema changes are needed in the database. In the cache, store a versioned value envelope:

{
  "v":    14,
  "data": { ... entity fields ... },
  "written_at": "2026-04-17T10:00:00Z"
}

The version field comes from the database row (e.g., a version column incremented on every UPDATE). It lets the cache client reject out-of-order writes using compare-and-swap.

Core Workflow

  1. The application receives a write request.
  2. Begin a database transaction; update the row and increment its version.
  3. Commit the transaction.
  4. Write the new value (with the committed version) to the cache using a compare-and-swap: only overwrite if the stored version is less than the new version.
  5. Return success to the caller.
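Step 4's compare-and-swap can be sketched in-process. A dict stands in for Redis here; in a real deployment the read-compare-write would need to run atomically on the cache server, e.g. inside a Lua script, rather than as three separate client calls.

```python
def cas_set(cache, key, envelope):
    """Overwrite the cached envelope only if the stored version is older.

    `cache` is a dict standing in for Redis. Rejecting writes whose
    version is <= the stored one discards out-of-order updates from
    slower concurrent writers.
    """
    current = cache.get(key)
    if current is not None and current["v"] >= envelope["v"]:
        return False  # stale write: a newer version already landed
    cache[key] = envelope
    return True
```

A late-arriving write from a slower request thread is simply dropped; its value is already superseded in the database, so dropping it preserves freshness.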

On read, the application checks the cache first. A cache hit is always fresh by design. A cache miss (cold start or eviction) falls back to the database and repopulates the cache.

Failure Modes and Tradeoffs

  • Cache write failure after DB commit: The DB has the new value but the cache holds the old one. Mitigate by wrapping the cache write in a retry with exponential backoff, and set a TTL on every cache entry as a safety net: the stale entry will eventually expire on its own.
  • Write amplification: Every write hits both systems. At high write throughput this can saturate the cache connection pool. Batch writes where possible.
  • Cold-start penalty: After a cache restart or flush, every read misses until the key is either written again or repopulated by the read-miss path. Write-through self-warms only the keys that are actively written; read-heavy keys that are rarely updated must be warmed by read misses against the database.
  • Thundering herd on eviction: If a heavily-read key is evicted between two writes, multiple readers race to repopulate it. Use a probabilistic early expiration or a cache-aside read lock.
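The first failure mode above (cache write fails after the DB commit) is usually handled with a bounded retry loop. A minimal sketch; `cache_set` is any callable that may raise on transient failure, and the attempt count and delays are illustrative, not recommendations:

```python
import time

def set_with_retry(cache_set, key, value, attempts=3, base_delay=0.01):
    """Retry a cache write after a committed DB write.

    If every attempt fails, give up and rely on the entry's TTL to
    expire the stale value; the database already holds the truth.
    """
    for attempt in range(attempts):
        try:
            cache_set(key, value)
            return True
        except ConnectionError:
            if attempt == attempts - 1:
                return False  # exhausted retries; TTL is the safety net
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return False
```

Returning `False` rather than raising keeps the caller's write path successful: the database commit already happened, so the write itself did not fail.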

Scalability Considerations

Write-through performs best when the cache hit rate is high and writes are infrequent relative to reads. Benchmark the ratio: if writes exceed 20-30% of total cache operations, the synchronous double-write overhead may push you toward write-behind (async) caching instead.

For multi-region deployments, write-through becomes expensive if the cache is geographically remote from the database. In that case, write to the local region's cache and replicate asynchronously, accepting a short inconsistency window between regions.

Use Redis pipelining to batch the cache write with any related key invalidations in the same round-trip, reducing latency overhead.

Summary

Write-through cache is the simplest correctness guarantee you can buy: the cache is always fresh after a write. The cost is write latency and amplification. It is the right default for read-heavy workloads where stale data carries business risk. Pair it with a short TTL and a versioned envelope to handle the edge case where the cache write fails after a successful database commit.
