What Is gRPC?
gRPC is a high-performance, open-source remote procedure call framework developed by Google. It uses Protocol Buffers (protobuf) as its interface definition language and serialization format, and runs over HTTP/2. gRPC enables strongly-typed, code-generated client/server communication across languages — Go, Java, Python, C++, Node.js, and more all interoperate via the same .proto contract.
Protocol Buffers IDL
A .proto file defines the service contract: message types and RPC methods. Field numbers (not names) are used in the binary wire format, making the schema extensible without breaking existing clients:
```proto
syntax = "proto3";

message User {
  int64 id = 1;
  string name = 2;
  string email = 3;
  repeated string roles = 4;
}

service UserService {
  rpc GetUser (GetUserRequest) returns (User);
  rpc ListUsers (ListUsersRequest) returns (stream User);
  rpc CreateUser (User) returns (User);
}
```
The protoc compiler generates client stubs and server interfaces in your target language. You implement the server interface and call the generated client — no manual HTTP request construction or JSON parsing.
Binary Wire Format
Protocol Buffers use a compact binary encoding. Each field in a serialized message is preceded by a tag that encodes both the field number and the wire type:
tag = (field_number << 3) | wire_type
Wire types:
- 0 — Varint: int32, int64, uint32, uint64, sint32, sint64, bool, enum.
- 1 — 64-bit: fixed64, sfixed64, double.
- 2 — Length-delimited: string, bytes, embedded messages, packed repeated fields.
- 5 — 32-bit: fixed32, sfixed32, float.
Fields not present in a message are simply omitted — there is no null marker. This is why protobuf messages are smaller than equivalent JSON: no field names, no quotes, no braces, no nulls.
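The tag formula above is easy to verify by hand. A minimal sketch in Go (the `tag` helper is illustrative, not part of any protobuf library):

```go
package main

import "fmt"

// tag computes a protobuf field tag: the field number shifted
// left 3 bits, OR'd with the wire type in the low 3 bits.
func tag(fieldNumber, wireType int) int {
	return fieldNumber<<3 | wireType
}

func main() {
	// Field 2 ("name", a string) uses wire type 2 (length-delimited):
	// (2 << 3) | 2 = 18 = 0x12 — a single tag byte on the wire.
	fmt.Printf("0x%02x\n", tag(2, 2)) // 0x12
	// Field 1 ("id", an int64) uses wire type 0 (varint):
	fmt.Printf("0x%02x\n", tag(1, 0)) // 0x08
}
```

Field numbers up to 15 combined with any wire type fit in one byte, which is why the most frequently set fields in a message should get the low numbers.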
Varint and Zigzag Encoding
Varint encoding stores integers in a variable number of bytes. Each byte uses 7 bits for data and 1 bit (the MSB) to indicate whether more bytes follow. Small integers (common in practice) take 1–2 bytes; large integers take up to 10 bytes for int64.
The problem: negative integers in two’s complement have their high bits set, so a negative int32 is sign-extended to 64 bits and always encodes as 10 bytes. The solution for signed fields that are frequently negative is zigzag encoding (sint32/sint64): map 0→0, -1→1, 1→2, -2→3, and so on, so small absolute values encode compactly regardless of sign.
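Both encodings are small enough to sketch directly. The helpers below are illustrative reimplementations, not calls into a protobuf library:

```go
package main

import "fmt"

// zigzag maps signed integers to unsigned so small absolute
// values get small varints: 0→0, -1→1, 1→2, -2→3, ...
func zigzag(n int64) uint64 {
	// Arithmetic right shift smears the sign bit across all 64 bits.
	return uint64(n<<1) ^ uint64(n>>63)
}

// varint encodes v as a base-128 varint: 7 data bits per byte,
// with the MSB set on every byte except the last.
func varint(v uint64) []byte {
	var out []byte
	for v >= 0x80 {
		out = append(out, byte(v)|0x80)
		v >>= 7
	}
	return append(out, byte(v))
}

func main() {
	fmt.Println(len(varint(1)))          // 1 byte
	fmt.Println(len(varint(300)))        // 2 bytes
	fmt.Println(zigzag(-2))              // 3
	fmt.Println(len(varint(zigzag(-1)))) // 1 byte as sint64, vs 10 as int64
}
```

The last line is the whole point of `sint64`: -1 zigzags to 1 and takes one byte, where a plain `int64` field would spend ten.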
Backward Compatibility Rules
Protocol Buffers are designed for schema evolution. The rules for safe changes:
- Safe: add new optional fields (old clients ignore them; new clients get the default value if absent).
- Safe: rename fields (names are not in the wire format; only field numbers matter).
- Never do: reuse a field number for a different type. Old clients will misinterpret the bytes.
- Never do: change a field’s wire type (e.g., int32 → string). This is a breaking change.
- Unknown fields: proto3 preserves unknown fields during parse/re-serialize, so a new server can round-trip fields added by a newer client.
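Proto3 makes the "never reuse a field number" rule mechanically enforceable with `reserved` declarations: after deleting a field, reserve its number (and optionally its name) so the compiler rejects any attempt to reuse it. A minimal sketch (the field numbers and the `nickname` name here are hypothetical):

```proto
syntax = "proto3";

message User {
  reserved 5, 6;        // numbers of deleted fields — cannot be reassigned
  reserved "nickname";  // deleted field name, guards JSON/text formats

  int64 id = 1;
  string name = 2;
}
```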
gRPC over HTTP/2
gRPC uses HTTP/2 as its transport, gaining significant advantages over HTTP/1.1:
- Binary framing: HTTP/2 messages are binary frames, not text. Lower parsing overhead and no ambiguity.
- Multiplexing: multiple RPC calls share a single TCP connection as independent HTTP/2 streams. There is no head-of-line blocking at the HTTP level (though TCP-level HOL blocking still exists — QUIC/HTTP/3 resolves this).
- Header compression (HPACK): repeated headers (like `content-type: application/grpc` and authorization tokens) are compressed, reducing overhead on high-frequency RPCs.
- Flow control: HTTP/2 has both connection-level and stream-level flow control, preventing a fast sender from overwhelming a slow receiver.
Four RPC Types
gRPC supports four communication patterns, all defined in the .proto file:
- Unary RPC: client sends one request, server returns one response. The classic request-response pattern: `rpc GetUser(Request) returns (Response)`.
- Server streaming RPC: client sends one request, server returns a stream of responses. Good for large result sets or live event feeds: `rpc ListEvents(Request) returns (stream Event)`.
- Client streaming RPC: client sends a stream of messages, server returns one response. Good for bulk uploads: `rpc UploadChunks(stream Chunk) returns (UploadResult)`.
- Bidirectional streaming RPC: both sides send streams of messages independently. Good for real-time collaboration or chat: `rpc Chat(stream Message) returns (stream Message)`.
Deadline Propagation and Cancellation
gRPC has first-class support for deadlines. A client sets a deadline on a call; gRPC propagates this deadline to all downstream RPCs in the call chain. If the original deadline expires, all in-flight downstream calls are cancelled automatically.
This prevents the "deadline ignored" problem common in REST APIs, where a timed-out client leaves orphaned work running on the server. In gRPC, when a client cancels or times out, the server context is cancelled and well-written server code stops processing immediately.
Interceptors (Middleware)
gRPC interceptors are middleware applied to every RPC on a client or server. Common uses:
- Authentication: extract and validate JWT or API key from metadata on every inbound call.
- Logging: record method name, latency, and status code for every RPC.
- Retry logic: client-side interceptor that retries idempotent calls on transient failures with exponential backoff.
- Tracing: inject and propagate distributed trace IDs (OpenTelemetry, Jaeger) through gRPC metadata.
Interceptors are composable — you chain multiple interceptors. Libraries like go-grpc-middleware provide a suite of production-ready interceptors.
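The composition pattern behind interceptor chaining can be shown without the grpc dependency. Below is a deliberately simplified sketch — `handler`, `interceptor`, and `chain` are illustrative types, not the grpc-go API, which passes contexts and typed requests:

```go
package main

import "fmt"

// handler is a simplified unary handler: request in, response out.
type handler func(req string) (string, error)

// interceptor wraps a handler, mirroring how a gRPC interceptor
// wraps the next element in the chain.
type interceptor func(next handler) handler

// chain composes interceptors so the first listed runs outermost.
func chain(h handler, ics ...interceptor) handler {
	for i := len(ics) - 1; i >= 0; i-- {
		h = ics[i](h)
	}
	return h
}

func logging(next handler) handler {
	return func(req string) (string, error) {
		fmt.Println("log: calling with", req)
		return next(req)
	}
}

func auth(next handler) handler {
	return func(req string) (string, error) {
		if req == "" {
			return "", fmt.Errorf("unauthenticated")
		}
		return next(req)
	}
}

func main() {
	h := chain(func(req string) (string, error) {
		return "hello " + req, nil
	}, logging, auth)
	resp, _ := h("alice")
	fmt.Println(resp) // hello alice
}
```

Because `auth` short-circuits before calling `next`, a rejected request never reaches the handler — the same property real gRPC auth interceptors rely on.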
gRPC-Web and Browser Support
Browser networking APIs do not expose the HTTP/2 framing and trailers that gRPC requires. gRPC-Web solves this by using a proxy (typically Envoy or grpc-web-proxy) that translates between the browser’s HTTP/1.1 or Fetch API calls and the backend’s native gRPC over HTTP/2.
gRPC-Web supports unary and server-streaming RPCs but not full bidirectional streaming (browser fetch API limitations). For bidirectional streaming from a browser, WebSockets or WebTransport are alternatives.
gRPC vs REST/JSON
When choosing between gRPC and REST:
- Schema enforcement: gRPC requires a `.proto` contract; REST is schema-optional (OpenAPI helps but is not enforced at the transport layer).
- Code generation: gRPC generates type-safe client/server code; REST clients are often hand-rolled or generated from OpenAPI with varying quality.
- Payload size: protobuf binary is typically 3–10x smaller than equivalent JSON. Matters at high RPC rates or on constrained networks.
- Streaming: gRPC has native streaming; REST requires SSE, chunked transfer, or WebSockets.
- Browser support: REST over HTTP/1.1 works natively everywhere; gRPC requires HTTP/2 and a proxy for browsers.
- Debugging: JSON is human-readable; protobuf binary requires tooling (`grpcurl`, server reflection) to inspect.
gRPC is the default choice for internal microservice communication. REST remains the default for public APIs where broad client compatibility and human readability matter.
Frequently Asked Questions
How are Protocol Buffer fields encoded in binary format?
Each field is encoded as a tag-value pair. The tag combines the field number (from the .proto file) and wire type using the formula: tag = (field_number << 3) | wire_type. Wire types: 0=varint, 1=64-bit fixed, 2=length-delimited (strings, bytes, embedded messages), 5=32-bit fixed. Varints use 7 bits per byte with the MSB indicating continuation. This encoding is extremely compact — a field with number 1 and wire type 2 encodes in a single byte.
How does gRPC handle streaming RPCs?
gRPC supports four RPC types over HTTP/2 streams. Unary: single request, single response. Server streaming: single request, stream of responses. Client streaming: stream of requests, single response. Bidirectional streaming: both sides send streams independently. All streaming uses HTTP/2 DATA frames. The server signals stream completion with a trailers frame (HTTP/2 HEADERS with END_STREAM). gRPC status codes (OK, CANCELLED, UNAVAILABLE) are sent as trailers.
What is gRPC deadline propagation?
Each gRPC call carries a deadline (absolute time, not timeout) in the grpc-timeout header. When a service receives a request with a deadline, it should pass a reduced deadline to any downstream calls (context.WithDeadline in Go). If the deadline expires, the call is automatically cancelled. This ensures that cascading timeouts do not cause unbounded latency chains. The deadline is a client-specified maximum — servers should respect it and abort work early if exceeded.
How does gRPC compare to REST for microservice communication?
gRPC advantages: strongly typed contracts via .proto IDL, code generation for all languages, binary Protocol Buffers encoding (5-10x smaller than JSON), native streaming support, built-in deadlines and cancellation, HTTP/2 multiplexing. REST advantages: human-readable JSON, browser-native (no gRPC-Web proxy needed), simpler debugging with curl, wider ecosystem tooling. gRPC is preferred for internal service-to-service communication; REST for public APIs where developer experience matters.
What is Protocol Buffer backward compatibility?
Backward compatibility rules: never change a field number (wire format depends on it), never change a field type incompatibly, only add new fields as optional, never remove a required field (proto2), never reuse a field number (old clients may still send it). Unknown fields are preserved by proto3 parsers, enabling zero-downtime rolling deployments where clients and servers can run different proto versions simultaneously. Reserved field numbers prevent accidental reuse of deleted fields.