What Is a Photo Sharing Service?
A photo sharing service lets users upload, organize, and share photographs with fine-grained access control. Think Instagram or Google Photos: users post images to feeds or albums; other users follow, like, and comment. The system must handle bursty write traffic during peak hours, serve resized thumbnails at low latency, and support social graph queries (who follows whom, whose feed should show this photo).
Data Model / Schema
-- PostgreSQL dialect throughout (the original mixed MySQL AUTO_INCREMENT/ENUM
-- with Postgres UUID columns; no single engine accepts both).
CREATE TABLE users (
  user_id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  username VARCHAR(64) UNIQUE NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE photos (
  photo_id UUID PRIMARY KEY,
  owner_id BIGINT REFERENCES users(user_id),
  storage_key VARCHAR(1024) NOT NULL, -- pointer to object storage
  caption TEXT,
  visibility TEXT NOT NULL DEFAULT 'public'
    CHECK (visibility IN ('public', 'followers', 'private')),
  created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE photo_variants (
  photo_id UUID REFERENCES photos(photo_id),
  variant VARCHAR(32) NOT NULL, -- e.g. 'thumb_200', 'medium_800', 'original'
  url TEXT NOT NULL,
  PRIMARY KEY (photo_id, variant)
);
CREATE TABLE follows (
  follower_id BIGINT REFERENCES users(user_id),
  followee_id BIGINT REFERENCES users(user_id),
  created_at TIMESTAMP DEFAULT NOW(),
  PRIMARY KEY (follower_id, followee_id)
);
CREATE TABLE feed_items (
  user_id BIGINT NOT NULL,
  photo_id UUID NOT NULL,
  score BIGINT NOT NULL, -- timestamp-based or ranked
  PRIMARY KEY (user_id, photo_id)
);
Core Algorithm: Upload and Feed Fanout
- Upload. Client uploads original via pre-signed URL to object storage. A completion event triggers the transcoding worker.
- Thumbnail generation. The worker produces resized variants (200px thumb, 800px medium) using ImageMagick or libvips, stores them in the same bucket under variant-specific keys, and writes rows to photo_variants.
- Feed fanout (push model). A fanout service reads the uploader's follower list from the follows table and writes a feed_items row for each follower. For celebrity accounts with millions of followers, a hybrid push-pull model is used: push to active users, generate lazily for inactive ones.
- Feed read. GET /feed queries feed_items for the requesting user, ordered by score DESC, paginated with a cursor.
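The hybrid fanout step above can be sketched as follows. This is a minimal in-memory model, not a production service: the follower threshold, the `active` set, and the dict-based stores are all illustrative assumptions standing in for real tables and queues.

```python
from collections import defaultdict

FOLLOWER_THRESHOLD = 1_000_000   # assumed cutoff for switching to hybrid mode

follows = defaultdict(set)       # followee_id -> {follower_id, ...}
active = set()                   # user_ids considered "active"
feed_items = defaultdict(list)   # user_id -> [(score, photo_id), ...]
pull_queue = defaultdict(list)   # items merged lazily when the user next reads

def fan_out(uploader_id, photo_id, score, threshold=FOLLOWER_THRESHOLD):
    followers = follows[uploader_id]
    if len(followers) < threshold:
        # Push model: materialize a feed row for every follower.
        for f in followers:
            feed_items[f].append((score, photo_id))
    else:
        # Hybrid: push only to active followers; inactive followers
        # get the post merged in lazily at read time.
        for f in followers:
            if f in active:
                feed_items[f].append((score, photo_id))
            else:
                pull_queue[f].append((score, photo_id))
```

A small account (`threshold` not reached) gets a full push; passing a low `threshold` exercises the celebrity path, where only active followers receive materialized rows.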
Failure Handling
- Thumbnail failure: Variants are generated asynchronously. The photo record is marked ready even if only the original exists; variant generation retries with exponential backoff.
- Fanout lag: Feed writes are eventually consistent. A brief delay (seconds) is acceptable; the SLA is communicated to product as eventual consistency.
- Hotspot followers (celebrities): Fanout for accounts with >1M followers is sharded across multiple worker partitions to avoid single-consumer lag.
- Database overload: Read replicas serve feed queries. The primary handles writes only.
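The retry-with-exponential-backoff behavior described under thumbnail failure can be sketched like this. The base delay, attempt cap, and jitter range are assumed tuning values; a real worker would re-enqueue the job with a delay rather than sleep in-process.

```python
import random
import time

MAX_ATTEMPTS = 5   # assumed cap before dead-lettering
BASE_DELAY = 1.0   # seconds; assumed base for the backoff curve

def retry_with_backoff(task, max_attempts=MAX_ATTEMPTS, sleep=time.sleep):
    """Run task(); on failure wait BASE_DELAY * 2**attempt plus jitter,
    then retry. Raises the last error once attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return task()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up; production would dead-letter the job here
            delay = BASE_DELAY * (2 ** attempt) + random.uniform(0, 0.5)
            sleep(delay)
```

Injecting `sleep` makes the schedule testable: passing a list's `append` records the computed delays (1s, 2s, 4s, ... plus jitter) without actually waiting.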
Scalability Considerations
- CDN for variants: All photo variant URLs point to CDN edges. Origin object storage is rarely hit after the first cache warm-up.
- Sharded feed store: feed_items is sharded by user_id, and each user's feed lives in a single Redis sorted set, giving sub-millisecond range queries. - Storage cost: Original photos are tiered to cold storage after 180 days; CDN-cached variants cover almost all reads.
- Search: Photo captions and tags are indexed in Elasticsearch for hashtag and keyword search without touching the relational DB.
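The cursor-paginated feed read over a sorted set can be modeled in a few lines. This stands in for a Redis ZREVRANGEBYSCORE call with an exclusive max-score cursor; the plain Python list (kept sorted ascending by score) is an assumption standing in for one user's sorted set.

```python
import bisect

def read_feed(feed, cursor=None, page_size=2):
    """feed: list of (score, photo_id), sorted ascending by score.

    Returns (page, next_cursor). page is newest-first; next_cursor is
    the score to pass back for the next page (exclusive), mirroring
    Redis's "(max" syntax for ZREVRANGEBYSCORE.
    """
    if cursor is None:
        end = len(feed)
    else:
        # First index with score >= cursor; everything below is older.
        end = bisect.bisect_left(feed, (cursor,))
    start = max(0, end - page_size)
    page = list(reversed(feed[start:end]))   # newest first
    next_cursor = page[-1][0] if page else None
    return page, next_cursor

feed = [(1, "a"), (2, "b"), (3, "c"), (4, "d"), (5, "e")]
page, cur = read_feed(feed)            # newest two items
page, cur = read_feed(feed, cursor=cur)  # next page, older items
```

A score-based cursor stays stable as new items arrive at the head of the feed, which offset-based pagination does not.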
Summary
A photo sharing service combines object storage for raw assets, an async variant-generation pipeline for thumbnails, and a fanout-on-write feed architecture backed by sharded Redis sorted sets. The hybrid push-pull model for celebrity accounts and CDN-first read path are the key design decisions that make the system both fast and cost-effective at scale.