Build a File Uploader: Chunked, Resumable, and Drag-Drop

“Build a file uploader” is one of the most popular frontend interview questions because it tests UX patterns (drag-drop, progress), file APIs (FileReader, Blob), network resilience (chunking, retry), and a real understanding of how browsers handle large data.

Functional requirements

  • Drag-and-drop or click to select
  • Show progress during upload
  • Resume interrupted uploads
  • Cap file size client-side
  • Validate MIME type
  • Multiple file selection
  • Mobile-friendly (camera capture)

Drag-drop UX

Standard pattern:

  • Drop zone with visible border
  • Visual change on drag-over (highlight, “Drop here”)
  • Standard <input type="file" /> as fallback
  • Both should accept the same file types

Edge case: files dragged from the desktop arrive in dataTransfer.files, but an image dragged out of another browser tab often arrives as a URL (text/uri-list) with no File at all. Handle both.
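
A minimal sketch of the wiring, assuming a #drop-zone element and a handleFiles function you define elsewhere:

const zone = document.getElementById('drop-zone');

// dragover must be cancelled, or the browser navigates away to the dropped file
zone.addEventListener('dragover', (e) => {
  e.preventDefault();
  zone.classList.add('drag-active');
});

zone.addEventListener('dragleave', () => zone.classList.remove('drag-active'));

zone.addEventListener('drop', (e) => {
  e.preventDefault();
  zone.classList.remove('drag-active');
  handleFiles(e.dataTransfer.files); // same handler the <input> fallback calls
});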

The basic upload

For small files (<100MB):

const formData = new FormData();
formData.append('file', file);

// Do not set Content-Type yourself: the browser adds the multipart boundary
const res = await fetch('/upload', {
  method: 'POST',
  body: formData
});
if (!res.ok) throw new Error(`Upload failed: ${res.status}`);

Simple. Works. Loses everything if interrupted.

Chunked upload

For larger files:

  1. Split file into chunks (typical: 5–10 MB)
  2. Upload each chunk as a separate request
  3. Server reassembles
  4. If a chunk fails, retry only that chunk
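
A minimal sketch of the client-side loop. The /upload-chunk endpoint and its query parameters are assumptions, not a standard:

const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB

const uploadChunks = async (file, fileId) => {
  const total = Math.ceil(file.size / CHUNK_SIZE);
  for (let i = 0; i < total; i++) {
    // Blob.slice creates a cheap view of the bytes, not a copy
    const chunk = file.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    const res = await fetch(`/upload-chunk?fileId=${fileId}&index=${i}&total=${total}`, {
      method: 'PUT',
      body: chunk,
    });
    if (!res.ok) throw new Error(`Chunk ${i} failed: ${res.status}`);
  }
};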

Standards:

  • tus.io: open protocol for resumable uploads
  • S3 multipart upload: AWS-specific, well-supported
  • Custom: design your own chunking protocol

Resumable uploads

Track per-file state:

  • File ID (server-generated)
  • Total chunks
  • Uploaded chunks (set of completed indices)
  • State persisted in localStorage so refresh does not lose progress

On resume: ask server which chunks it has; upload only the missing ones.
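
One way to wire this up. /upload-status is a hypothetical endpoint, and uploadChunk is a single-chunk variant of the loop sketched above:

const resumeUpload = async (file, fileId) => {
  // Ask the server which chunk indices it already has
  const res = await fetch(`/upload-status?fileId=${fileId}`);
  const { receivedChunks } = await res.json(); // e.g. [0, 1, 4]
  const done = new Set(receivedChunks);

  const total = Math.ceil(file.size / CHUNK_SIZE);
  for (let i = 0; i < total; i++) {
    if (done.has(i)) continue; // upload only what is missing
    await uploadChunk(file, fileId, i);
    done.add(i);
    // Mirror progress locally so a refresh can resume without guessing
    localStorage.setItem(`upload:${fileId}`, JSON.stringify([...done]));
  }
};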

Progress reporting

Use the upload property on XHR (fetch does not support upload progress yet):

const xhr = new XMLHttpRequest();
xhr.upload.onprogress = (e) => {
  // lengthComputable is false when the total size is unknown
  if (e.lengthComputable) setProgress(e.loaded / e.total);
};

For chunked uploads: aggregate progress across chunks.
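
A sketch of that aggregation; totalChunks, file, and setProgress come from the surrounding upload code:

const sentPerChunk = new Array(totalChunks).fill(0);

const onChunkProgress = (chunkIndex, e) => {
  if (!e.lengthComputable) return;
  sentPerChunk[chunkIndex] = e.loaded;
  const sentTotal = sentPerChunk.reduce((a, b) => a + b, 0);
  setProgress(sentTotal / file.size);
};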

Concurrent chunks

Upload multiple chunks in parallel for speed:

  • Cap parallelism (3–4 chunks max — too many flood the connection)
  • Track which chunks are in flight
  • Retry failures

Net result: significantly faster than serial upload.
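
A minimal pool sketch capping parallelism at 3, reusing the hypothetical uploadChunk from the resumable section (retry omitted for brevity):

const uploadParallel = async (file, fileId, totalChunks, concurrency = 3) => {
  const pending = [...Array(totalChunks).keys()]; // chunk indices 0..n-1
  // Each worker pulls the next index until the shared queue is empty
  const worker = async () => {
    while (pending.length > 0) {
      const i = pending.shift();
      await uploadChunk(file, fileId, i);
    }
  };
  await Promise.all(Array.from({ length: concurrency }, worker));
};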

Pre-signed URLs (S3 pattern)

For S3 / GCS / Azure Blob direct upload:

  1. Client requests pre-signed upload URL from your server
  2. Server generates signed URL with expiration
  3. Client uploads directly to S3
  4. Client notifies server of completion

Benefit: your server does not proxy the data. Scales much better.
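
The client side of that flow. /presign and /upload-complete are hypothetical endpoints on your server, and the response shape is an assumption:

const directUpload = async (file) => {
  // 1. Ask your server to sign an upload URL
  const presign = await fetch('/presign', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: file.name, type: file.type }),
  });
  const { url, key } = await presign.json();

  // 2. PUT straight to object storage; your server never touches the bytes
  const put = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  if (!put.ok) throw new Error('Direct upload failed');

  // 3. Tell your server the object landed
  await fetch('/upload-complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key }),
  });
};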

Validation

Client-side validation is UX only — server must re-validate.

Common validations:

  • File size: file.size
  • MIME type: file.type (sniff first bytes for true detection)
  • Image dimensions: load into Image element first
  • Custom format: parse and validate
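
A sketch of the size and MIME checks. The PNG signature is the real one; MAX_SIZE is an arbitrary cap for illustration:

const MAX_SIZE = 100 * 1024 * 1024; // 100 MB

const validate = async (file) => {
  if (file.size > MAX_SIZE) throw new Error('File too large');

  // file.type is derived from the filename, so sniff the actual bytes
  const head = new Uint8Array(await file.slice(0, 8).arrayBuffer());
  const pngMagic = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
  if (file.type === 'image/png' && !pngMagic.every((b, i) => head[i] === b)) {
    throw new Error('Not a real PNG');
  }
};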

Mobile considerations

  • Camera capture: <input type="file" accept="image/*" capture="environment" /> ("user" targets the front camera)
  • Photo library access
  • iOS HEIC: server-side conversion or transparent transcoding
  • Cellular data: warn for very large uploads
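
For the cellular warning, the Network Information API is one option where it exists (Chromium-based browsers only; warnLargeUpload is a hypothetical prompt):

// navigator.connection is not available in all browsers
const conn = navigator.connection;
if (conn && (conn.type === 'cellular' || conn.saveData)) {
  warnLargeUpload(file.size);
}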

Error handling

  • Network drop: retry with exponential backoff
  • 4xx: stop, show error
  • 5xx: retry several times
  • Quota exceeded: clear error message
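
A minimal retry wrapper with exponential backoff. It assumes the wrapped function throws an Error carrying the HTTP status, which is a convention you set up, not a browser default:

const withRetry = async (fn, retries = 3) => {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // 4xx means the request itself is wrong; retrying cannot help
      if (err.status >= 400 && err.status < 500) throw err;
      if (attempt >= retries) throw err;
      // 1s, 2s, 4s, ... before the next attempt
      await new Promise((r) => setTimeout(r, 1000 * 2 ** attempt));
    }
  }
};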

Common mistakes

  • No size limit; user uploads 10GB and hangs
  • No progress; user thinks upload is dead
  • No retry; transient failures kill the upload
  • Client trusts MIME type from filename (.exe renamed .png)
  • Synchronous file reads block the UI

Frequently Asked Questions

Should I use a library like Uppy or build from scratch?

For interview practice, build it. For production, Uppy handles tons of edge cases (transformations, multiple sources, retries) you would otherwise reinvent.

How do I handle a 10GB upload?

Chunked upload, resumable, parallel chunks, pre-signed S3 URLs. A single 10GB request is fragile: one network blip near the end and the whole upload restarts from zero.

Can I upload to S3 directly without a backend?

You need a backend to issue the pre-signed URL. The actual data flow can bypass your backend.
