What is a Storage Quota System?
A storage quota system tracks and enforces per-user limits on resource consumption: disk storage (Dropbox, Google Drive), API calls per month, emails sent, or any countable resource. The design must handle concurrent writes without race conditions, provide real-time usage visibility, and enforce limits at high throughput without a performance bottleneck.
Requirements
- Track storage used per user (bytes)
- Enforce quotas: reject uploads that would exceed the limit
- Show current usage vs limit in real time
- Quota tiers: free (5GB), pro (100GB), enterprise (unlimited)
- Handle concurrent uploads: two simultaneous 3GB uploads for a 5GB-limit user must not both succeed
- Usage history for billing: average GB-months computed from daily snapshots
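The tier limits above translate into byte constants. A minimal sketch (the names `PLAN_LIMITS` and `storage_limit` are illustrative, not part of the design):

```python
# Plan limits in bytes; None marks the unlimited enterprise tier,
# mirroring NULL in the UserQuota table below.
GIB = 1024 ** 3

PLAN_LIMITS = {
    "free": 5 * GIB,        # 5 GB
    "pro": 100 * GIB,       # 100 GB
    "enterprise": None,     # unlimited
}

def storage_limit(plan: str):
    """Return the byte limit for a plan, or None for unlimited."""
    return PLAN_LIMITS[plan]
```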
Data Model
CREATE TABLE UserQuota (
    user_id             UUID PRIMARY KEY,
    plan                VARCHAR NOT NULL,
    storage_limit_bytes BIGINT,                    -- NULL = unlimited
    storage_used_bytes  BIGINT NOT NULL DEFAULT 0, -- denormalized current usage
    updated_at          TIMESTAMPTZ
);

CREATE TABLE StorageObject (
    object_id  UUID PRIMARY KEY,
    user_id    UUID NOT NULL,
    key        VARCHAR NOT NULL,
    size_bytes BIGINT NOT NULL,
    created_at TIMESTAMPTZ,
    deleted_at TIMESTAMPTZ                         -- soft-delete marker
);

CREATE TABLE UsageDailySnapshot (
    user_id       UUID,
    date          DATE,
    storage_bytes BIGINT,
    PRIMARY KEY (user_id, date)
);
Atomic Quota Check-and-Reserve
A naive read-check-write flow races: two concurrent 3GB uploads for a user with 1GB used (limit 5GB) both read usage=1GB, both check 1+3 <= 5 and pass, and both commit, leaving the user at 7GB, well over the limit. Solution: an atomic conditional UPDATE that checks and reserves in a single statement:
def reserve_quota(user_id, bytes_needed):
    result = db.execute('''
        UPDATE UserQuota
        SET storage_used_bytes = storage_used_bytes + :bytes,
            updated_at = NOW()
        WHERE user_id = :uid
          AND (storage_limit_bytes IS NULL
               OR storage_used_bytes + :bytes <= storage_limit_bytes)
        RETURNING storage_used_bytes, storage_limit_bytes
    ''', uid=user_id, bytes=bytes_needed)
    if not result:  # no row matched: the reservation would exceed the limit
        raise QuotaExceeded('Storage quota exceeded')
    return result

def release_quota(user_id, bytes_freed):
    db.execute('''
        UPDATE UserQuota
        SET storage_used_bytes = GREATEST(0, storage_used_bytes - :bytes)
        WHERE user_id = :uid
    ''', uid=user_id, bytes=bytes_freed)

def upload_file(user_id, file_key, file_bytes):
    reserve_quota(user_id, len(file_bytes))  # reserve before writing to S3
    try:
        s3.put_object(key=file_key, body=file_bytes)
        db.insert(StorageObject(user_id=user_id, key=file_key,
                                size_bytes=len(file_bytes)))
    except Exception:
        release_quota(user_id, len(file_bytes))  # roll back the reservation
        raise
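The conditional-UPDATE pattern can be exercised end to end with nothing but SQLite. This is a self-contained sketch, not the production schema: it checks the cursor's rowcount instead of RETURNING, and the table is trimmed to the two quota columns:

```python
import sqlite3

# Atomic check-and-reserve: the WHERE clause and the SET execute as one
# statement, so rowcount == 0 means the reservation was rejected.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE UserQuota (
    user_id TEXT PRIMARY KEY,
    storage_limit_bytes INTEGER,          -- NULL = unlimited
    storage_used_bytes INTEGER NOT NULL DEFAULT 0)""")
GIB = 1024 ** 3
db.execute("INSERT INTO UserQuota VALUES ('u1', ?, ?)", (5 * GIB, 1 * GIB))

def reserve(user_id, n):
    cur = db.execute("""
        UPDATE UserQuota
        SET storage_used_bytes = storage_used_bytes + :n
        WHERE user_id = :uid
          AND (storage_limit_bytes IS NULL
               OR storage_used_bytes + :n <= storage_limit_bytes)""",
        {"uid": user_id, "n": n})
    return cur.rowcount == 1  # True = reservation succeeded

first = reserve("u1", 3 * GIB)   # 1 + 3 <= 5: succeeds
second = reserve("u1", 3 * GIB)  # 4 + 3 > 5: rejected
print(first, second)             # True False
```

The same two calls issued from concurrent connections behave identically: the database serializes the UPDATEs, so exactly one reservation wins.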
Redis-Based Quota for High Throughput
For API-call-rate quotas at 100K checks/second, Redis atomic operations outperform DB transactions. Use a Redis Lua script for an atomic check-and-increment:
# Lua script: atomically check limit and increment.
# Returns -1 if over limit, otherwise returns the new total.
# A limit <= 0 is treated as unlimited.
QUOTA_LUA = """
local current = tonumber(redis.call('GET', KEYS[1]) or 0)
local delta = tonumber(ARGV[1])
local limit = tonumber(ARGV[2])
if limit > 0 and current + delta > limit then
    return -1
end
return redis.call('INCRBY', KEYS[1], delta)
"""

def check_and_increment(user_id, amount, limit):
    key = f'quota:{user_id}:storage'
    # redis-py EVAL: script, number of KEYS, then keys and args
    result = redis.eval(QUOTA_LUA, 1, key, amount, limit)
    if result == -1:
        raise QuotaExceeded()
    return result
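The script's semantics can be sanity-checked without a Redis server using a small in-memory model (illustrative only; in real Redis the atomicity comes from the server executing the whole script single-threaded):

```python
# In-memory model of the Lua script's logic: compare current + delta
# against the limit, and only increment when the check passes.
store = {}

def simulate_check_and_increment(key, delta, limit):
    current = store.get(key, 0)
    if limit > 0 and current + delta > limit:
        return -1                    # over limit: reject, no increment
    store[key] = current + delta     # INCRBY
    return store[key]

print(simulate_check_and_increment('quota:u1:storage', 3, 5))  # 3
print(simulate_check_and_increment('quota:u1:storage', 3, 5))  # -1 (3 + 3 > 5)
print(simulate_check_and_increment('quota:u1:storage', 2, 5))  # 5
```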
Usage History for Billing
-- Nightly snapshot job
INSERT INTO UsageDailySnapshot (user_id, date, storage_bytes)
SELECT user_id, CURRENT_DATE, storage_used_bytes FROM UserQuota
ON CONFLICT (user_id, date) DO UPDATE
SET storage_bytes = EXCLUDED.storage_bytes;
-- Monthly billing: average GB stored over the month (GB-months).
-- AVG divides by the number of snapshots actually present,
-- so months of 28-31 days are handled without a hardcoded /30.
SELECT user_id,
       AVG(storage_bytes) / (1024.0^3) AS gb_months
FROM UsageDailySnapshot
WHERE date >= date_trunc('month', CURRENT_DATE)
GROUP BY user_id;
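The GB-months arithmetic is easy to check by hand. A sketch of the same computation the billing query performs, with an assumed list of daily snapshot values:

```python
# GB-months from daily snapshots: average daily usage over the month.
GIB = 1024 ** 3

def gb_months(daily_snapshot_bytes):
    """daily_snapshot_bytes: one storage_bytes value per day of the month."""
    return sum(daily_snapshot_bytes) / len(daily_snapshot_bytes) / GIB

# A user who stores 100 GB for 15 days and 0 GB for the other 15 days
# pays for 50 GB-months, not 100:
snapshots = [100 * GIB] * 15 + [0] * 15
print(gb_months(snapshots))  # 50.0
```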
Key Design Decisions
- Conditional UPDATE for atomic reservation — prevents over-quota under concurrency without locks
- Denormalized storage_used_bytes — never compute with SUM on the hot path; maintain incrementally
- Redis Lua for high-throughput quotas — atomic, sub-millisecond; sync to DB periodically
- Daily snapshots for billing — exact billing requires historical usage, not just current state
- GREATEST(0, …) on decrement — prevents negative quota from race conditions or double-deletes
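Because the denormalized counter can drift (crashed uploads, double-deletes), a periodic job should recompute the true total from StorageObject and overwrite the counter. A minimal SQLite sketch, with the schema trimmed to the columns the job touches:

```python
import sqlite3

# Periodic reconciliation: recompute actual usage from live objects
# and overwrite the denormalized counter, fixing any drift.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE UserQuota (user_id TEXT PRIMARY KEY, storage_used_bytes INTEGER)")
db.execute("""CREATE TABLE StorageObject (
    object_id INTEGER PRIMARY KEY, user_id TEXT,
    size_bytes INTEGER, deleted_at TEXT)""")

db.execute("INSERT INTO UserQuota VALUES ('u1', 999)")  # drifted counter
db.executemany(
    "INSERT INTO StorageObject (user_id, size_bytes, deleted_at) VALUES (?, ?, ?)",
    [('u1', 100, None), ('u1', 200, None), ('u1', 300, '2024-01-01')])

def reconcile():
    # Sum live objects only (deleted_at IS NULL); COALESCE covers
    # users with no files at all.
    db.execute("""
        UPDATE UserQuota
        SET storage_used_bytes = COALESCE((
            SELECT SUM(size_bytes) FROM StorageObject
            WHERE user_id = UserQuota.user_id AND deleted_at IS NULL), 0)""")

reconcile()
fixed = db.execute(
    "SELECT storage_used_bytes FROM UserQuota WHERE user_id='u1'").fetchone()[0]
print(fixed)  # 300
```

This is the expensive SUM that stays off the hot path: it runs nightly or weekly, not per request.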