What Is a Media Player Service?
A media player service manages server-side playback state, persists resume positions across sessions and devices, selects the appropriate bitrate stream through Adaptive Bitrate (ABR) logic, and records a watch history. It abstracts the stateful aspects of media consumption away from the client so that a user can stop on a phone and resume on a TV with no manual bookmarking.
Requirements
Functional Requirements
- Persist playback position as the user watches so resume is accurate to within a few seconds.
- Return the correct resume offset when a user opens a piece of content they previously started.
- Select an initial ABR quality profile based on device type, network conditions reported by the client, and user preference.
- Record every play, pause, seek, and completion event to a watch history log.
- Support multiple concurrent sessions per user (e.g., pause on phone, resume on TV).
- Mark content as fully watched once the user reaches a configurable completion threshold (default 90%).
Non-Functional Requirements
- Position updates accepted at up to one write per 10 seconds per active session without overloading storage.
- Resume offset read latency under 20 ms P99.
- Watch history retained for 3 years; queryable by user and date.
Data Model
PlaybackSession
- session_id UUID — primary key.
- user_id, content_id, device_id.
- resume_offset_ms BIGINT — last known position in milliseconds.
- duration_ms BIGINT — total content duration for completion calculation.
- started_at, last_updated_at — timestamps.
- completed BOOLEAN — set when offset/duration exceeds the completion threshold.
WatchEvent
- event_id UUID, session_id FK, user_id, content_id.
- event_type ENUM: PLAY, PAUSE, SEEK, BUFFER, COMPLETE, QUALITY_CHANGE.
- offset_ms BIGINT — position at time of event.
- quality_profile NULLABLE — ABR resolution label (e.g., 1080p, 720p).
- occurred_at — timestamp.
QualityProfile
- profile_id, label (e.g., 1080p), bitrate_kbps, resolution_w, resolution_h.
- min_bandwidth_kbps — minimum network bandwidth required for stable playback.
Core Algorithms
Position Persistence
Clients send a heartbeat with the current offset every 10 seconds. To avoid per-heartbeat database writes at scale, the service buffers position updates in Redis using a hash keyed by session_id. A background writer flushes dirty session positions to PostgreSQL every 30 seconds using an upsert on the PlaybackSession table. On session start, the service reads the resume offset from PostgreSQL, the authoritative store; if that value was last updated more than 60 seconds ago, it also checks Redis and uses the fresher buffered position if one exists.
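The buffering scheme can be sketched as follows. This is a minimal in-process model: a plain dict stands in for the Redis hash, and `flush` plays the role of the background writer. The class and method names here are illustrative, not part of any real API.

```python
import time

class PositionBuffer:
    """Buffers heartbeat offsets in memory (stand-in for a Redis hash
    keyed by session_id) and flushes dirty entries in one batch."""

    def __init__(self):
        self._positions = {}   # session_id -> (offset_ms, updated_at)
        self._dirty = set()    # session_ids changed since the last flush

    def heartbeat(self, session_id, offset_ms):
        # Called every ~10 s per active session; no database I/O here.
        self._positions[session_id] = (offset_ms, time.time())
        self._dirty.add(session_id)

    def flush(self, upsert_batch):
        # Background writer, run every ~30 s: hand all dirty positions
        # to a single batched upsert, then clear the dirty set.
        batch = [(sid, self._positions[sid][0]) for sid in self._dirty]
        if batch:
            upsert_batch(batch)
        self._dirty.clear()
        return len(batch)

flushed = []
buf = PositionBuffer()
buf.heartbeat("s1", 10_000)
buf.heartbeat("s1", 20_000)   # overwritten; only the latest offset is flushed
buf.heartbeat("s2", 5_000)
buf.flush(flushed.extend)     # one batched write instead of three
```

Note that repeated heartbeats for the same session collapse into one row in the flush batch, which is where the write reduction comes from.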
ABR Quality Selection
At session start, the client reports estimated bandwidth in kbps and device class (mobile, tablet, TV). The service selects the highest QualityProfile whose min_bandwidth_kbps is below 80% of reported bandwidth, capped by the device class ceiling (e.g., mobile capped at 720p). This initial profile is returned with the session creation response. Mid-session quality changes are driven by the player client using standard HLS or DASH ABR logic; the service only records QUALITY_CHANGE events for analytics.
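The selection rule above can be sketched as a pure function. The profile table and device ceilings below are illustrative placeholder values; in the real service they would come from the QualityProfile store.

```python
from dataclasses import dataclass

@dataclass
class QualityProfile:
    label: str
    min_bandwidth_kbps: int

# Illustrative values only; real profiles live in the QualityProfile table.
PROFILES = [  # ordered best-first
    QualityProfile("1080p", 5000),
    QualityProfile("720p", 2500),
    QualityProfile("480p", 1100),
    QualityProfile("240p", 400),
]
DEVICE_CEILING = {"mobile": "720p", "tablet": "1080p", "tv": "1080p"}

def select_initial_profile(reported_kbps: int, device_class: str) -> QualityProfile:
    # Use only 80% of reported bandwidth as headroom against estimation error.
    usable = reported_kbps * 0.8
    ceiling = DEVICE_CEILING[device_class]
    allowed = False
    for profile in PROFILES:
        if profile.label == ceiling:
            allowed = True  # profiles above the device ceiling are skipped
        if allowed and profile.min_bandwidth_kbps <= usable:
            return profile
    return PROFILES[-1]  # lowest profile as a safe floor
```

For example, a TV reporting 8000 kbps gets 1080p, while a phone reporting the same bandwidth is capped at 720p by its device ceiling.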
Completion Detection
When a position update arrives with offset_ms / duration_ms >= 0.90, the service sets completed=true on the PlaybackSession and publishes a CONTENT_COMPLETED event to a Kafka topic. Downstream consumers (recommendation engine, progress tracker, achievement service) subscribe to this topic to react to completions without coupling to the media player service directly.
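The threshold check itself is a one-liner, but it needs two guards: don't divide by an unknown duration, and don't fire twice for an already-completed session. A minimal sketch (function name is illustrative):

```python
COMPLETION_THRESHOLD = 0.90  # configurable, per the functional requirements

def crossed_completion(offset_ms: int, duration_ms: int, already_completed: bool) -> bool:
    """Return True exactly once, when a position update first crosses the
    completion threshold; the caller then sets completed=true and publishes
    CONTENT_COMPLETED to the Kafka topic (not shown here)."""
    if already_completed or duration_ms <= 0:
        return False
    return offset_ms / duration_ms >= COMPLETION_THRESHOLD
```

The `already_completed` guard keeps the Kafka publish idempotent even if the user seeks back and forth past the threshold.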
API Design
- POST /v1/sessions — create session; returns session_id, resume_offset_ms, and the initial quality profile.
- PUT /v1/sessions/{session_id}/position — heartbeat update; body: offset_ms.
- POST /v1/sessions/{session_id}/events — record a WatchEvent (play, pause, seek, complete).
- GET /v1/users/{user_id}/history — paginated watch history ordered by last_updated_at DESC.
- GET /v1/users/{user_id}/resume/{content_id} — returns the resume offset for a specific content item; used on content detail page load.
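A session creation response might look like the following. Field names beyond session_id, resume_offset_ms, and the quality profile fields listed in the data model are illustrative, and the values are made up for the example:

```json
{
  "session_id": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",
  "resume_offset_ms": 1342000,
  "quality_profile": {
    "label": "1080p",
    "bitrate_kbps": 5000
  }
}
```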
Scalability
Write Path
Position heartbeats are the highest-volume write. Buffering in Redis with a 30-second flush window reduces database write IOPS by roughly 3x. Each Redis key has a TTL of 24 hours to self-clean abandoned sessions. The PostgreSQL upsert uses ON CONFLICT (session_id) DO UPDATE and is batched across all dirty sessions in a single transaction per flush cycle to minimize round-trips.
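The batched upsert can be sketched as below. In-memory SQLite stands in for PostgreSQL here (SQLite has supported the same `ON CONFLICT ... DO UPDATE` syntax since 3.24); a real service would use a PostgreSQL driver such as psycopg with `%s` placeholders instead of `?`. Table and column names follow the PlaybackSession model; the helper name is illustrative.

```python
import sqlite3

# SQLite stand-in for the PostgreSQL PlaybackSession table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE playback_session (
        session_id TEXT PRIMARY KEY,
        resume_offset_ms INTEGER NOT NULL,
        last_updated_at TEXT NOT NULL DEFAULT (datetime('now'))
    )
""")

UPSERT = """
    INSERT INTO playback_session (session_id, resume_offset_ms)
    VALUES (?, ?)
    ON CONFLICT (session_id) DO UPDATE SET
        resume_offset_ms = excluded.resume_offset_ms,
        last_updated_at = datetime('now')
"""

def flush_positions(conn, dirty):
    # All dirty sessions in one transaction per flush cycle.
    with conn:  # commits on success, rolls back on error
        conn.executemany(UPSERT, dirty)

flush_positions(conn, [("s1", 10_000), ("s2", 5_000)])
flush_positions(conn, [("s1", 40_000)])  # later flush updates, not duplicates
rows = dict(conn.execute(
    "SELECT session_id, resume_offset_ms FROM playback_session"))
```

The `excluded` pseudo-table refers to the row that failed to insert, so each conflicting session row is overwritten with its newest buffered offset.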
Watch History
WatchEvents are written to Kafka and consumed by a columnar store for analytics. The PostgreSQL sessions table stores only the most recent state per session, keeping it small and fast. Archived events older than 90 days are moved to cold storage via a nightly job, with a thin metadata index retained in the primary database for history list queries.