Ticket booking systems handle some of the most demanding concurrency scenarios in software engineering. When a popular concert goes on sale, thousands of users simultaneously compete for a limited number of seats. Getting the low-level design right means solving seat inventory management, preventing double-booking, handling payment atomicity, and scaling to traffic spikes that can be 100x normal load. This post walks through the core components and the engineering decisions behind each.
Seat Map Model
The foundation of a ticket booking system is the seat inventory model. A venue is broken down into a hierarchical structure: venue → section → row → seat. Each seat record in the database carries: seat_id, event_id, venue_id, section, row, column, type (floor, balcony, VIP, accessible), price_tier, and status (available, held, sold, blocked).
Storing seat status purely in a relational database is too slow under high-demand conditions. The canonical approach is to project the seat map into Redis. Each seat gets a key seat:{event_id}:{seat_id} with a value encoding status and holder. Redis hash structures let you fetch an entire section's availability in a single HGETALL command. The database remains the system of record; Redis is the fast read layer that absorbs the thundering herd at sale time.
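The hash-per-section layout can be sketched with an in-memory stand-in for Redis. The key name `availability:{event_id}:{section}` is an illustrative assumption, not the only possible scheme; the point is that one HGETALL-style call returns a whole section's availability.

```python
# Minimal sketch of the Redis projection layer, simulated with a plain dict.
# In production this would be redis-py against a real cluster; the hash key
# naming here (availability:{event_id}:{section}) is an assumption.

class SeatMapCache:
    """Mimics Redis hashes: one hash per (event, section), one field per seat."""

    def __init__(self):
        self._hashes = {}  # hash_key -> {seat_field: status}

    def hset(self, key, field, value):
        self._hashes.setdefault(key, {})[field] = value

    def hgetall(self, key):
        # Equivalent of HGETALL: an entire section's availability in one call.
        return dict(self._hashes.get(key, {}))


cache = SeatMapCache()
cache.hset("availability:ev1:A", "A-1-1", "available")
cache.hset("availability:ev1:A", "A-1-2", "sold")
section = cache.hgetall("availability:ev1:A")
```

The database remains authoritative; a projector rebuilds these hashes from the seat table whenever they drift.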
Seat types drive pricing and display logic. VIP sections carry different seat objects with amenity flags. Accessible seats require adjacency metadata so a user requesting wheelchair seating gets seats next to companion seats. The data model needs to express these spatial relationships, typically as a group_id linking seats that must be sold together or kept adjacent.
Hold Mechanism
The hold mechanism is the most critical piece of the system. When a user selects seats, those seats must be reserved for them temporarily without being permanently committed. The canonical implementation uses Redis atomic operations:
SET seat:{event_id}:{seat_id} {session_id} NX EX 600

The NX flag (set only if the key does not exist) makes the operation atomic: if two users attempt to hold the same seat simultaneously, exactly one SET succeeds. Setting the expiry in the same command avoids the crash window of the older SETNX-then-EXPIRE pair, where a failure between the two commands leaves a hold that never expires. The TTL of 600 seconds (10 minutes) gives the user time to complete checkout. If they abandon the flow, the seat automatically returns to available when the key expires.
For selecting multiple seats atomically, use a Lua script executed via EVAL. The script checks all requested seats are available, then sets all holds in one atomic transaction. If any seat is already held, the script sets none and returns a failure code. This prevents partial holds where a user gets 3 of 4 seats they wanted.
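The all-or-nothing semantics of that Lua script can be sketched in Python against an in-memory store (a real implementation would ship this as a Lua script via EVAL so Redis executes it atomically; the single-threaded simulation below is an assumption for illustration):

```python
import time

def hold_seats(store, event_id, seat_ids, session_id, ttl=600, now=None):
    """All-or-nothing multi-seat hold, mirroring the Lua-script semantics:
    check every seat first; place no holds unless all are free."""
    now = time.time() if now is None else now
    keys = [f"seat:{event_id}:{s}" for s in seat_ids]
    # Phase 1: verify every requested seat is free (or its hold has expired).
    for k in keys:
        entry = store.get(k)
        if entry is not None and entry[1] > now:
            return False  # at least one seat is held: create no partial holds
    # Phase 2: place all holds with an expiry timestamp.
    for k in keys:
        store[k] = (session_id, now + ttl)
    return True
```

A second session requesting any overlapping seat gets a clean failure and can retry with a different selection, rather than being stuck holding three of four seats.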
Explicit release happens when the user cancels their selection or navigates away. The client sends a release request; the server deletes the Redis keys and logs the release. TTL expiry is the fallback for abandoned sessions. A background job reconciles Redis state with the database periodically to catch any drift.
Checkout and Payment
Once seats are held, the checkout flow must maintain those holds for the duration of the payment process. The hold TTL should be extended when the user reaches the payment form—extend to 15 minutes from the moment payment details are entered. This extension is conditional: only extend if the key still exists and still maps to the same session, preventing a race where an expired hold is re-extended by a stale client.
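The conditional extension is a compare-and-extend: check existence, ownership, and freshness before touching the TTL. A sketch with the same in-memory hold store (in Redis this would be a small Lua script so the check and the extension are one atomic step):

```python
def extend_hold(store, key, session_id, new_ttl, now):
    """Conditionally extend a hold: only if the key still exists, has not
    expired, and still belongs to the same session."""
    entry = store.get(key)
    if entry is None:
        return False
    holder, expires_at = entry
    if holder != session_id or expires_at <= now:
        return False  # a stale or foreign client must not resurrect the hold
    store[key] = (holder, now + new_ttl)
    return True
```

The two failure branches are exactly the races the text describes: an expired hold that another user may already have claimed, and a hold that now belongs to a different session.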
On payment success, the transition from held to sold must be atomic across both Redis and the database. The sequence: (1) charge the payment method, (2) on charge success, write a tickets record in the DB within a transaction, (3) update seat status to sold in the DB, (4) update Redis key to sold. If the DB write fails after a successful charge, a compensating transaction must refund the payment. This is the classic distributed transaction problem; using an outbox pattern with a reliable message queue reduces the failure window.
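The charge-then-commit sequence with its compensating refund can be sketched as a small saga step. The callables here are injected stand-ins (assumptions), and the outbox/queue machinery is omitted; the point is only the control flow around the failure window:

```python
def complete_purchase(charge, write_ticket, refund):
    """Sketch of charge -> DB commit with a compensating refund.
    charge, write_ticket, and refund are injected stand-ins for the
    payment provider call and the DB transaction (assumptions)."""
    charge_id = charge()            # step 1: take payment
    try:
        write_ticket(charge_id)     # steps 2-3: tickets + seat status, one DB txn
    except Exception:
        refund(charge_id)           # compensating transaction on DB failure
        raise
```

With an outbox pattern, `write_ticket` would also insert a "mark Redis sold" message in the same transaction, so step 4 happens reliably even if the process dies right after commit.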
On payment failure, release the hold immediately and return the seats to available. The user should be shown the seat map again, but their previously selected seats may now be gone. Design the UI to handle this gracefully with clear messaging.
Barcode and QR Code Generation
Each issued ticket gets a unique, unforgeable identifier. The ticket record includes: ticket_id (UUID), event_id, seat_id, buyer_id, issued_at, and current_owner_id. The QR code payload is not just the ticket_id—it includes an HMAC signature to prevent forgery:
payload = ticket_id + ":" + event_id + ":" + issued_at
qr_data = payload + ":" + HMAC_SHA256(payload, secret_key)
At venue entry, the scanner decodes the QR, recomputes the HMAC, and verifies it matches. A valid signature proves the ticket was issued by the system. The scanner also checks a real-time revocation list (tickets that have been transferred or refunded) via an API call or a locally cached Bloom filter updated every 60 seconds.
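Both sides of the scheme above fit in a few lines with Python's standard library. The secret value is a placeholder; in practice it would live in a secrets manager and be rotated:

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # placeholder; keep real keys out of source

def make_qr_data(ticket_id, event_id, issued_at):
    """Build the signed QR payload: payload + HMAC-SHA256 signature."""
    payload = f"{ticket_id}:{event_id}:{issued_at}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_qr_data(qr_data):
    """Recompute the HMAC over the payload and compare in constant time."""
    payload, _, sig = qr_data.rpartition(":")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

`hmac.compare_digest` avoids timing side channels; any tampering with the payload or signature makes verification fail.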
Ticket transfer updates current_owner_id in the database and invalidates the old QR code by adding the old ticket_id to the revocation list. A new QR code is generated for the new owner. The ownership chain (original buyer, all transfers) is preserved in a ticket_transfers table for dispute resolution.
High-Traffic Event Handling
Major event on-sales present a traffic pattern unlike normal operations: millions of users arrive at exactly the same second. The architecture must handle this without crashing and without creating a stampede that overwhelms payment infrastructure.
The virtual waiting room is the primary mitigation. Users who arrive before sale time are placed in a queue. At sale time, tokens are issued in batches sized to what the checkout and payment systems can actually process concurrently (e.g., 5,000 active checkout sessions). Users with tokens proceed to the seat selection flow; users without tokens see their queue position and an estimated wait time.
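A waiting room backed by a FIFO structure can be sketched as follows. This simulates, in memory, what a Redis sorted set keyed by arrival time would do; the batch size would be tuned to real checkout capacity (the 5,000 figure above), not the tiny numbers used here:

```python
import heapq
import itertools

class WaitingRoom:
    """FIFO queue drained in fixed-size batches, mimicking a sorted set
    scored by arrival order. In-memory stand-in for a Redis-backed queue."""

    def __init__(self):
        self._arrival = itertools.count()  # monotonically increasing score
        self._queue = []

    def enter(self, user_id):
        heapq.heappush(self._queue, (next(self._arrival), user_id))

    def issue_tokens(self, batch_size):
        # Admit the earliest arrivals, up to current checkout capacity.
        batch = []
        while self._queue and len(batch) < batch_size:
            _, user = heapq.heappop(self._queue)
            batch.append(user)
        return batch
```

A background worker would call `issue_tokens` on a timer, sizing each batch to the number of checkout sessions that have freed up since the last tick.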
The seat map itself is served as a static asset—a pre-rendered JSON snapshot of available seats updated every 30 seconds and served from CDN. Users load the seat map from CDN, not the origin. Only the hold request and checkout flow hit the origin servers. This keeps origin traffic proportional to actual purchase attempts, not page views.
Separate server pools handle high-demand events vs. regular inventory. A high-demand event triggers auto-scaling of the hold and checkout services. Circuit breakers protect the payment provider integration from being overwhelmed; if the payment provider is backing up, the system throttles new checkout sessions rather than queuing unbounded requests.
Scalability
The read path for seat availability is read-heavy and latency-sensitive. Redis cluster sharded by event_id ensures that all seat state for a given event lives on a single shard, allowing atomic Lua scripts across seats within an event. Cross-event queries are rare and can tolerate higher latency.
The write path (holds and purchases) is serialized per event through a queue. Each event has a dedicated queue; workers process hold and purchase requests for that event sequentially, eliminating the need for distributed locking. Worker count per event scales with event size. For a 70,000-seat stadium, you need far more parallelism than for a 200-seat theater—partitioning the queue by seat section allows parallel processing while preventing cross-section conflicts.
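The per-section partitioning can be sketched with plain queues. This in-memory model (an assumption; in production each partition would be a real queue topic with a dedicated worker) shows the invariant: requests for the same section are serialized, while different sections drain in parallel:

```python
from collections import deque

class PartitionedQueue:
    """Per-event write path partitioned by seat section: requests for the
    same section are processed in order, sections proceed independently."""

    def __init__(self):
        self._partitions = {}  # section -> deque of requests

    def enqueue(self, section, request):
        self._partitions.setdefault(section, deque()).append(request)

    def drain(self, section):
        # Each worker owns exactly one section and drains it sequentially,
        # so no two requests for the same seat can ever race.
        q = self._partitions.get(section, deque())
        out = []
        while q:
            out.append(q.popleft())
        return out
```

Seats never move between sections, so section-level ordering is sufficient to make distributed locks unnecessary on the write path.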
Database read replicas serve availability queries. The primary handles writes only. For reporting (total revenue, seats sold by section, etc.), a separate analytics replica or data warehouse receives a stream of events from the write path via change data capture.
Resale and Transfer
Authorized resale requires the platform to act as an intermediary to prevent fraud and price gouging (where policy dictates). The transfer flow: seller initiates transfer, specifying buyer email or marketplace listing. For direct transfer, buyer receives an email with a claim link. On claim, ownership updates atomically—old QR invalidated, new QR issued to buyer.
For marketplace resale, the ticket enters an escrow state: listed. The original ticket is held by the platform. When a buyer purchases the listing, the platform executes the transfer atomically with the payment. If the sale falls through, the ticket reverts to the original owner. Resale price caps are enforced at the application layer during listing creation.
The ownership chain in ticket_transfers records every transfer: transfer_id, ticket_id, from_user_id, to_user_id, transfer_type (direct, marketplace, gift), price, transferred_at. This supports chargeback defense and regulatory reporting in jurisdictions with resale regulations.
Cancellation and Refund
Cancellation policy is a first-class data model concern. Policy rules attach to event-ticket-type combinations: full refund more than 30 days before the event, 50% refund between 7 and 30 days out, no refund inside 7 days. The cancellation service evaluates policy before processing.
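Evaluating such tiered rules reduces to a first-match lookup over tiers sorted by days-out. The default tiers below mirror the example policy in the text; the numbers are illustrative, not universal:

```python
def refund_fraction(days_until_event, policy=None):
    """Return the refundable fraction for a cancellation, given a tiered
    policy as (min_days_out, fraction) pairs sorted by days descending.
    Default tiers mirror the example policy above (illustrative numbers)."""
    if policy is None:
        policy = [(30, 1.0), (7, 0.5), (0, 0.0)]
    for min_days, fraction in policy:
        if days_until_event >= min_days:
            return fraction
    return 0.0
```

Because the policy is data, different event-ticket-type combinations can attach different tier tables without any code change.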
On cancellation: (1) validate refund eligibility against policy, (2) mark ticket as cancelled in DB, (3) add ticket to QR revocation list, (4) return seat to available pool in both DB and Redis, (5) initiate refund via payment provider. Steps 2–4 happen in a database transaction. The refund is initiated after the transaction commits—if the refund fails, a retry queue ensures eventual delivery.
Event cancellation by the organizer triggers bulk cancellation. A background job processes tickets in batches, marks all as cancelled, returns seats (moot for a cancelled event, but keeps data consistent), and enqueues refunds. Users are notified by email as their batch is processed. For events with tens of thousands of tickets, this job runs for minutes; progress tracking lets support staff monitor completion.