Choosing the right real-time mechanism is a frontend system design question that comes up routinely in senior interviews. WebSocket is the default choice for most engineers, but it is not always the right one. Understanding the tradeoffs is baseline professional knowledge.
The three main options
WebSocket
Persistent bidirectional TCP connection upgraded from HTTP. Both client and server can send messages at any time.
Server-Sent Events (SSE)
HTTP-based one-way (server → client) streaming. The connection stays open and the server pushes events over a `text/event-stream` response.
Long polling
Client makes HTTP request; server holds it open until data is ready (or timeout); response triggers next request.
When to use WebSocket
- Truly bidirectional communication (chat, gaming)
- Low latency requirements
- Many small messages per second per connection
- You can manage state on a stateful server
Examples: Slack messaging, Figma cursors, multiplayer game state.
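One practical wrinkle with bidirectional use: messages sent before the socket finishes its handshake are lost. A minimal sketch of a send buffer, with a `SocketLike` stand-in interface (an assumption here, covering only the parts of the browser `WebSocket` API the wrapper needs) so the logic runs outside a browser:

```typescript
// Minimal interface covering the parts of the browser WebSocket we use.
interface SocketLike {
  readyState: number; // 0 = CONNECTING, 1 = OPEN
  send(data: string): void;
  addEventListener(type: "open", cb: () => void): void;
}

// Buffers outgoing messages until the underlying socket is open,
// so callers can fire-and-forget without racing the handshake.
class BufferedSocket {
  private queue: string[] = [];

  constructor(private socket: SocketLike) {
    socket.addEventListener("open", () => this.flush());
  }

  send(data: string): void {
    if (this.socket.readyState === 1) {
      this.socket.send(data);
    } else {
      this.queue.push(data);
    }
  }

  private flush(): void {
    for (const msg of this.queue) this.socket.send(msg);
    this.queue = [];
  }
}
```

In a real app you would construct it as `new BufferedSocket(new WebSocket(url))`; the interface exists only so the queueing behavior is independently testable.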
When to use SSE
- Server-to-client only (notifications, dashboards, AI streaming responses)
- Want to leverage standard HTTP infrastructure (proxies, load balancers, auth middleware)
- Simpler than WebSocket for one-way
- Built-in browser auto-reconnect
Examples: stock tickers, live blog updates, LLM streaming responses (ChatGPT-style).
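On the server side, SSE framing is plain text, which is much of its appeal. A sketch of a formatter for one event (the `event:`, `id:`, and `data:` field names come from the SSE spec; `id` is what lets the browser resume after auto-reconnect via the `Last-Event-ID` header):

```typescript
// Formats one Server-Sent Event per the text/event-stream wire format:
// optional "event:" and "id:" fields, one "data:" line per payload line,
// terminated by a blank line.
function formatSseEvent(
  data: string,
  opts: { event?: string; id?: string } = {}
): string {
  const lines: string[] = [];
  if (opts.event) lines.push(`event: ${opts.event}`);
  if (opts.id) lines.push(`id: ${opts.id}`);
  for (const line of data.split("\n")) lines.push(`data: ${line}`);
  return lines.join("\n") + "\n\n";
}
```

A server streams by writing such frames to a response whose `Content-Type` is `text/event-stream`; the multi-line `data:` handling matters because a payload containing a newline must be split across `data:` lines to survive the framing.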
When to use long polling
- Infrequent updates (seconds to minutes)
- Backward compat with environments that block WebSocket
- Simplicity matters more than latency
- Stateless backend
Examples: legacy systems, low-priority background updates.
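The long-polling cycle described above is short enough to sketch in full. The `poll` function is injected here as an assumption standing in for a `fetch()` call that the server holds open, resolving with data or with `null` on timeout (a 204, say); the cycle count is bounded only to make the loop testable:

```typescript
// One long-polling cycle: request, wait for data or timeout, repeat.
// `poll` stands in for a fetch() the server holds open; it resolves
// with data, or null when the server timed out with nothing to send.
async function longPoll(
  poll: () => Promise<string | null>,
  onMessage: (msg: string) => void,
  cycles: number // bounded for testability; real clients loop until closed
): Promise<void> {
  for (let i = 0; i < cycles; i++) {
    try {
      const msg = await poll();
      if (msg !== null) onMessage(msg); // timeout: just issue the next poll
    } catch {
      // transient network error: back off briefly before retrying
      await new Promise((r) => setTimeout(r, 1000));
    }
  }
}
```

Note the structure: the *response* (data or timeout) is what triggers the next request, so at most one request is in flight per client at a time.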
Performance comparison
| Mechanism | Latency | Server CPU | Server Memory | Bandwidth |
|---|---|---|---|---|
| WebSocket | <50ms | Low per message | Per-connection state | Lowest (binary) |
| SSE | <100ms | Low per message | Per-connection state | Higher (HTTP framing) |
| Long polling | 200ms–1s | Higher (HTTP setup per cycle) | Lower (no persistent state) | Higher (HTTP overhead) |
Common WebSocket gotchas
- No automatic reconnect — must implement (libraries: reconnecting-websocket)
- Proxies and load balancers may kill idle connections; send periodic ping/pong frames as a keepalive
- Auth: standard HTTP cookies work for upgrade; ongoing auth requires custom logic
- Server scaling: stateful — connection-aware load balancing or sticky sessions
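Because reconnect is on you, so is the reconnect schedule. A common sketch is exponential backoff with "full jitter" (a random delay in [0, cap]) so a fleet of clients dropped by the same server does not reconnect in lockstep; the base and cap values below are illustrative assumptions:

```typescript
// Reconnect delay for attempt N: exponential backoff capped at maxMs,
// with full jitter (uniform random in [0, cap]) to spread out the
// thundering herd after a server restart.
function reconnectDelay(attempt: number, baseMs = 500, maxMs = 30_000): number {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.random() * cap;
}
```

A client would call this on each `close` event, `setTimeout` for the returned delay, and reset `attempt` to 0 once a connection succeeds.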
Common SSE gotchas
- One-way only (client must use separate POST for upstream)
- Browsers cap HTTP/1.1 at about six connections per origin, which a few open tabs can exhaust; HTTP/2 multiplexes streams over one connection and avoids the limit
- Server must keep connections open — same scaling challenge as WebSocket
- EventSource is well supported but limited: it cannot set custom request headers (e.g. `Authorization`), and cross-origin use requires CORS plus `withCredentials` for cookies
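When EventSource's header limitation bites, a common workaround is to stream with `fetch()` and parse the `text/event-stream` body by hand. A sketch of the incremental parsing step, the tricky part, since network chunks can split an event anywhere (this simplified version extracts only `data:` lines and ignores `event:`/`id:` fields):

```typescript
// Incremental parser for a text/event-stream body read via fetch().
// Chunks may split events at any byte, so we buffer until a blank line
// (the event terminator) and then join the event's data lines.
function createSseParser(onEvent: (data: string) => void) {
  let buffer = "";
  return (chunk: string): void => {
    buffer += chunk;
    let sep: number;
    while ((sep = buffer.indexOf("\n\n")) !== -1) {
      const rawEvent = buffer.slice(0, sep);
      buffer = buffer.slice(sep + 2);
      const data = rawEvent
        .split("\n")
        .filter((l) => l.startsWith("data:"))
        .map((l) => l.slice(5).trimStart()) // spec strips one leading space
        .join("\n");
      if (data) onEvent(data);
    }
  };
}
```

In practice you would feed this from `response.body.getReader()` chunks run through a `TextDecoder`; libraries like `fetch-event-source` package the same idea with reconnect handling.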
Modern alternatives
WebTransport
Newer (2023+) standard built on HTTP/3 and QUIC (UDP underneath). Supports unreliable and unordered delivery, which gives lower latency than WebSocket for loss-tolerant traffic. Browser support is limited but growing. Useful for gaming or real-time media.
WebRTC data channels
Peer-to-peer, with TURN relay fallback when NAT traversal fails. Use for direct browser-to-browser communication.
HTTP/3 streaming
HTTP/3 runs over QUIC, which removes TCP head-of-line blocking, so SSE over HTTP/3 degrades less on lossy networks than SSE over HTTP/1.1.
The “always WebSocket” antipattern
Engineers default to WebSocket out of habit. Often SSE or even long polling is simpler and sufficient.
Default to SSE for one-way streams (notifications, AI responses). Reserve WebSocket for genuine bidirectional needs.
Auth strategies
- Cookie-based: works for same-origin upgrade requests
- Token in URL query string: easy, but the token ends up in server access logs and browser history (avoid)
- First-message auth: client sends auth message after connection; server validates before processing
- Custom subprotocol header: auth in Sec-WebSocket-Protocol
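The first-message pattern from the list above can be sketched server-side. Everything here is an illustrative assumption: `verifyToken` is a hypothetical validator, `Conn` a minimal connection interface, and `4401` an app-defined close code (codes 4000-4999 are reserved for application use):

```typescript
// First-message auth: accept the upgrade, then require the very first
// frame to carry a valid token; close the connection otherwise, and
// only route frames to the app handler after auth succeeds.
interface Conn {
  close(code: number, reason: string): void;
  send(data: string): void;
}

function makeAuthGate(
  verifyToken: (token: string) => boolean, // hypothetical validator
  onMessage: (conn: Conn, data: string) => void
) {
  // Returns a per-connection frame handler with its own auth state.
  return (conn: Conn) => {
    let authed = false;
    return (frame: string): void => {
      if (!authed) {
        let token: string | undefined;
        try {
          token = JSON.parse(frame)?.token;
        } catch {
          // not JSON: fall through and close below
        }
        if (token && verifyToken(token)) {
          authed = true;
          conn.send(JSON.stringify({ type: "auth_ok" }));
        } else {
          conn.close(4401, "unauthorized"); // app-defined close code
        }
        return;
      }
      onMessage(conn, frame); // normal traffic only after auth
    };
  };
}
```

The important property is that no application handler runs until the token check passes; a variant would also start a timer at connect and close any socket that has not authenticated within a few seconds.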
Frequently Asked Questions
Why does ChatGPT use SSE and not WebSocket for streaming responses?
One-way only (server → client tokens). SSE works on standard HTTP infrastructure (CDNs, proxies). Built-in auto-reconnect. Simpler than WebSocket for the use case.
Should I use Socket.io?
For new projects, native WebSocket is usually enough. Socket.io adds auto-reconnect, fallback transports, and namespaces; useful, but it adds bundle size and requires a matching Socket.io server. Decide based on need.
How does this affect frontend system design interviews?
Be ready to articulate which mechanism fits the use case. “WebSocket because real-time” is shallow. “SSE because one-way streaming and simpler to scale” is rich.