A connection pool maintains a set of pre-established connections to a resource (database, HTTP service, message broker) that can be reused across requests, avoiding the overhead of establishing a new connection for each operation. Establishing a TCP connection, completing a TLS handshake, and authenticating to the database typically takes 50-200ms. With connection pooling, that cost is paid once at startup, and subsequent requests borrow a connection in microseconds. PgBouncer and HikariCP are production connection poolers used at scale. Understanding connection pool design is essential for any system with a database or external service dependency.
Connection Pool Internals
A connection pool manages a fixed set of connections: idle connections waiting to be borrowed, and active connections in use by requests. Key parameters:
- min_connections: minimum connections to keep open (warm pool, no cold starts)
- max_connections: maximum connections (prevents overwhelming the database with too many concurrent connections)
- acquire_timeout: maximum time to wait for an available connection before throwing an error
- max_lifetime: maximum time a connection stays in the pool before being replaced (prevents stale connections from server-side timeouts)
- idle_timeout: how long an idle connection stays open before being closed (reduces server-side connection pressure during off-peak hours)
Pool size guideline: if the database can handle C concurrent connections, set max_connections = C / number_of_app_server_instances on each instance.
// Go database connection pool configuration
import (
    "database/sql"
    "log"
    "time"

    _ "github.com/lib/pq" // registers the "postgres" driver
)

db, err := sql.Open("postgres", dsn)
if err != nil {
    log.Fatal(err)
}
// Pool sizing: PostgreSQL default max_connections = 100.
// With 4 app server instances: 100/4 = 25 connections per instance.
db.SetMaxOpenConns(25)                  // max active connections
db.SetMaxIdleConns(10)                  // keep 10 idle connections warm
db.SetConnMaxLifetime(30 * time.Minute) // replace connections older than 30m
db.SetConnMaxIdleTime(5 * time.Minute)  // close idle connections after 5m
// HikariCP (Java) equivalent:
// hikari.maximumPoolSize=25
// hikari.minimumIdle=10
// hikari.connectionTimeout=30000 (30s acquire timeout)
// hikari.idleTimeout=600000 (10m idle timeout)
// hikari.maxLifetime=1800000 (30m max lifetime)
// PgBouncer: proxy-level connection pooler
// Sits between app servers and PostgreSQL
// transaction mode: one PostgreSQL connection per transaction (most efficient)
// session mode: one PostgreSQL connection per client session (compatible with all features)
// [pgbouncer]
// pool_mode = transaction
// max_client_conn = 1000    ; app servers can open 1000 client connections to PgBouncer
// default_pool_size = 25    ; PgBouncer opens only 25 real PostgreSQL connections
Connection Pool Sizing and Saturation
Setting max_connections too high: the database server runs out of memory (each PostgreSQL connection is a backend process using roughly 5-10MB) and context switching across thousands of backends becomes a CPU bottleneck. The classic mistake is setting max_connections = 1000 per app server: 10 app servers then overwhelm Postgres with 10,000 connections. Setting max_connections too low: requests queue waiting for a free connection (pool saturation), increasing response latency. Detecting saturation: monitor pool wait time (P99 time to acquire a connection), the active-to-idle connection ratio, and the connection error rate. HikariCP exposes these as JMX metrics. Use PgBouncer in transaction mode between app servers and PostgreSQL to multiplex thousands of app connections over a small pool of real PostgreSQL connections.
Connection Health Checking
A pooled connection can become invalid: the database server restarted, a network timeout caused a silent disconnect, or the server closed the connection due to an idle timeout. Validation strategies:
- Test on borrow: execute a lightweight query (SELECT 1) before returning the connection to the caller. Catches invalid connections but adds latency on every borrow.
- Test on return: validate when the connection is returned to the pool.
- Background validation: a pool maintenance thread periodically runs SELECT 1 on idle connections and removes failures.
- Error-based eviction: when a query fails with a connection error, remove the connection from the pool and retry on a new connection.
HikariCP validates connections on borrow by default (JDBC4 Connection.isValid(), or a configured connectionTestQuery) and can additionally ping idle connections at a configured keepaliveTime interval.
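The test-on-borrow strategy combined with eviction can be sketched as follows. Conn and its Ping method are stand-ins for a real driver connection running SELECT 1; the types are invented for the example.

```go
package main

import (
	"errors"
	"fmt"
)

type Conn struct{ healthy bool }

// Ping simulates a lightweight validation query (e.g. SELECT 1).
func (c *Conn) Ping() error {
	if !c.healthy {
		return errors.New("connection dead")
	}
	return nil
}

type Pool struct{ idle []*Conn }

// Borrow pops idle connections until one passes validation,
// silently evicting any that fail, so a dead connection never
// reaches the caller.
func (p *Pool) Borrow() (*Conn, error) {
	for len(p.idle) > 0 {
		c := p.idle[0]
		p.idle = p.idle[1:]
		if c.Ping() == nil {
			return c, nil
		}
		// Evict: drop the dead connection. A real pool would also
		// asynchronously open a replacement here.
	}
	return nil, errors.New("no valid connections available")
}

func main() {
	// One dead connection (server restarted) ahead of a live one.
	p := &Pool{idle: []*Conn{{healthy: false}, {healthy: true}}}
	c, err := p.Borrow()
	fmt.Println(c.healthy, err) // true <nil>
}
```

The trade-off named above is visible here: every Borrow pays the cost of at least one Ping, which is why many pools only validate connections that have been idle longer than some threshold.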
Key Interview Discussion Points
- Thread-per-request vs async: traditional thread-per-request models block threads waiting for connections (pool saturation visible as thread saturation); async models (Node.js, Java Reactor, Go goroutines) allow thousands of logical requests to share a small connection pool without blocking threads
- PgBouncer transaction mode limitations: PostgreSQL features that require session state (prepared statements, advisory locks, SET LOCAL, LISTEN/NOTIFY) break under transaction-mode connection pooling because the same PostgreSQL session is not guaranteed across transactions
- Connection pool per database: maintain separate pools for read replicas and the primary — route read-only queries to the replica pool, writes to the primary pool; replica pool can be sized for high read throughput
- Circuit breaker integration: when the database is overloaded, rapidly failing connections should trigger a circuit breaker that rejects new requests with a 503 immediately rather than queueing indefinitely and exhausting the connection pool
- Serverless and connection pools: each serverless function instance runs in its own short-lived environment and opens database connections directly, so there is no shared in-process pool; use a proxy-level pooler (RDS Proxy, PgBouncer, or a provider's built-in pooler such as Neon's) to avoid overwhelming the database with thousands of short-lived connections