Java remains the dominant language for enterprise backend systems, powering services at Google, Amazon, Netflix, and most Fortune 500 companies. This guide covers the Java-specific questions tested at senior engineering interviews — from modern Java features (virtual threads, records) to concurrency, garbage collection, and Spring Boot internals.
Virtual Threads (Project Loom, Java 21+)
Virtual threads are lightweight threads managed by the JVM rather than the OS. Create one directly with Thread.startVirtualThread(() -> doWork()), or through an executor: var executor = Executors.newVirtualThreadPerTaskExecutor(); executor.submit(() -> doWork());

Why virtual threads matter: OS threads are expensive (1-2 MB of stack each). A server handling 10,000 concurrent connections needs 10,000 threads, which is 10-20 GB just for stacks. Virtual threads use ~1 KB each, so 10,000 virtual threads cost ~10 MB, and you can create millions of them.

Programming model: write blocking code (the natural, readable style) and get the performance of async code. A virtual thread that blocks on I/O (a database query, an HTTP request) is "parked", and the underlying OS thread (the carrier thread) is freed to run other virtual threads. When the I/O completes, the virtual thread resumes on any available carrier. No callback hell, no CompletableFuture chains, no reactive-streams complexity.

Impact on Spring Boot: Spring Boot 3.2+ supports virtual threads. Set spring.threads.virtual.enabled=true and each request is handled by a virtual thread. The thread-per-request model works again, but now with millions of concurrent requests instead of hundreds.

Interview question: "When would you NOT use virtual threads?" Answer: CPU-bound tasks. Virtual threads do not help there, because they share carrier threads and CPU work cannot be parked; use platform threads or ForkJoinPool for CPU-bound parallelism. Also beware of code that uses synchronized blocks extensively: a virtual thread holds (pins) its carrier thread while inside a synchronized block.
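A minimal, self-contained sketch of the executor style described above (the helper name runBlockingTasks and the Thread.sleep standing in for real blocking I/O are illustrative, not a standard API):

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadDemo {

    // Runs `count` blocking tasks, one virtual thread per task, and
    // returns how many completed. Each blocked task is parked; its
    // carrier OS thread is freed to run other virtual threads.
    static int runBlockingTasks(int count) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var futures = IntStream.range(0, count)
                .mapToObj(i -> executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(10)); // stands in for blocking I/O
                    return i;
                }))
                .toList();
            int done = 0;
            for (var f : futures) {
                f.get(); // ordinary blocking style, no callbacks
                done++;
            }
            return done;
        } // close() waits for all submitted tasks to finish
    }

    public static void main(String[] args) throws Exception {
        // 10,000 concurrent "blocking" tasks would be unaffordable with
        // platform threads; with virtual threads this is routine.
        System.out.println(runBlockingTasks(10_000) + " tasks completed");
    }
}
```

Note that this is plain blocking code: there is no thenApply chain, yet all 10,000 sleeps overlap.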
Records, Sealed Classes, and Pattern Matching
Records (Java 16+): immutable data carriers with an auto-generated constructor, accessors, equals, hashCode, and toString. record Point(double x, double y) {}. Replaces verbose POJO boilerplate. Records are final (cannot be extended) and their fields are final (shallowly immutable). Use for: DTOs, API responses, value objects, and any class that is purely data.

Sealed classes (Java 17+): restrict which classes can extend or implement a type. sealed interface Shape permits Circle, Rectangle, Triangle {}. Only the permitted subtypes can implement Shape. Combined with pattern matching, this enables exhaustive switch expressions.

Pattern matching for switch (Java 21+): switch with type patterns. Shape shape = getShape(); String desc = switch (shape) { case Circle c -> "circle r=" + c.radius(); case Rectangle r -> "rect " + r.width() + "x" + r.height(); case Triangle t -> "triangle"; }; Because the selector is a sealed type, the compiler verifies exhaustiveness (all permitted subtypes are handled), so no default branch is needed. This replaces the visitor pattern and instanceof chains with cleaner, safer code.

Text blocks (Java 15+): multi-line strings delimited by """. String json = """
    {"name": "Java", "version": 21}
    """; No escaping of quotes, and incidental indentation is stripped.

Interview question: "What problem do sealed classes solve?" Answer: they model closed type hierarchies where all subtypes are known at compile time (shapes, AST nodes, event types). The compiler enforces exhaustive handling in switch expressions.
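Putting records, sealed interfaces, and pattern matching together, a compilable sketch (the Shape hierarchy and area function are the usual textbook example, not from a specific library):

```java
public class ShapeDemo {

    // Closed hierarchy: only these three records may implement Shape.
    sealed interface Shape permits Circle, Rectangle, Triangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}
    record Triangle(double base, double height) implements Shape {}

    // Exhaustive switch: the compiler knows every permitted subtype,
    // so no default branch is required. Adding a fourth Shape later
    // turns this into a compile error until it is handled.
    static double area(Shape s) {
        return switch (s) {
            case Circle c    -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
            case Triangle t  -> 0.5 * t.base() * t.height();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rectangle(3, 4))); // 12.0
    }
}
```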
Stream API and Functional Java
The Stream API (Java 8+) enables functional-style data processing. Pipeline: source -> intermediate operations (filter, map, flatMap, sorted, distinct) -> terminal operation (collect, reduce, forEach, count).

Lazy evaluation: intermediate operations do not execute until a terminal operation is called. This enables short-circuiting (findFirst stops after the first match), fusion (multiple operations applied in a single pass), and parallelism (parallel streams split work across threads).

Key operations: filter(predicate) keeps elements matching the condition. map(function) transforms each element. flatMap(function) transforms each element to a stream, then flattens. collect(Collectors.toList()) gathers results into a list. reduce(identity, accumulator) aggregates to a single value. groupingBy(classifier) groups elements by a key (like SQL GROUP BY).

Parallel streams: list.parallelStream().filter(…).map(…).collect(…) splits the work across ForkJoinPool threads. Beware: (1) not always faster (the overhead of splitting and merging outweighs the benefit for small collections or cheap operations); (2) non-thread-safe collectors cause bugs; (3) encounter order may not be preserved without explicit ordering. Use parallel streams for large collections (over 10,000 elements) with computationally expensive per-element operations.

Interview question: "What is the difference between map and flatMap?" Answer: map transforms each element to exactly one output; flatMap transforms each element to a stream (zero or more outputs) and flattens the results. Use flatMap for Optional chaining, one-to-many transformations, and nested collections.
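A short sketch of flatMap and groupingBy on a made-up Order record (the record and method names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamDemo {

    record Order(String customer, List<String> items) {}

    // flatMap: one order maps to many items, flattened into one stream.
    static List<String> allItems(List<Order> orders) {
        return orders.stream()
            .flatMap(o -> o.items().stream()) // List<Order> -> Stream<String>
            .distinct()
            .sorted()
            .toList();
    }

    // groupingBy: like SQL GROUP BY, here summing item counts per customer.
    static Map<String, Long> itemsPerCustomer(List<Order> orders) {
        return orders.stream()
            .collect(Collectors.groupingBy(
                Order::customer,
                Collectors.summingLong(o -> o.items().size())));
    }

    public static void main(String[] args) {
        var orders = List.of(
            new Order("alice", List.of("book", "pen")),
            new Order("bob",   List.of("pen")));
        System.out.println(allItems(orders));         // [book, pen]
        System.out.println(itemsPerCustomer(orders)); // {alice=2, bob=1} (order may vary)
    }
}
```

With map instead of flatMap, allItems would produce a Stream<List<String>> rather than a flat Stream<String>.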
Concurrency: CompletableFuture, ConcurrentHashMap
CompletableFuture: composable async computation. CompletableFuture.supplyAsync(() -> fetchData()).thenApply(data -> process(data)).thenAccept(result -> save(result)).exceptionally(ex -> { log(ex); return null; }); chains async operations without blocking. Combine multiple futures: CompletableFuture.allOf(f1, f2, f3).thenRun(() -> { /* all complete */ }); With virtual threads, CompletableFuture is less often necessary (just write blocking code on virtual threads), but it remains important for composing async results, timeout handling, and integration with reactive systems.

ConcurrentHashMap: a thread-safe hash map without a single global lock. Since Java 8 it synchronizes on individual hash bins (earlier versions used segment-based lock striping), so operations on different keys mostly proceed in parallel. Supports atomic compound operations: computeIfAbsent, merge, compute. ConcurrentHashMap.newKeySet() provides a concurrent Set. Use it instead of Collections.synchronizedMap, which locks the entire map on every operation.

AtomicInteger, AtomicLong, AtomicReference: lock-free atomic operations built on CAS (compare-and-swap). counter.incrementAndGet() is thread-safe without synchronized. Use for counters, flags, and simple atomic state.

volatile: ensures visibility of variable changes across threads. Without volatile, a thread may cache a variable's value and never see updates from other threads. volatile boolean running = true; makes writes immediately visible to other threads. It does NOT provide atomicity: use the Atomic* classes for read-modify-write operations.
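A small sketch of an atomic ConcurrentHashMap update under contention (the word-count scenario and thread count are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentCountDemo {

    // Four threads each count the same word array. merge() is an atomic
    // read-modify-write on ConcurrentHashMap, so no external locking is
    // needed; a plain HashMap with get()+put() here would lose updates.
    static Map<String, Integer> countWords(String[] words) throws InterruptedException {
        var counts = new ConcurrentHashMap<String, Integer>();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (String w : words) {
                    counts.merge(w, 1, Integer::sum); // atomic per key
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        var counts = countWords(new String[] {"a", "b", "a"});
        System.out.println(counts.get("a")); // 8: 2 occurrences x 4 threads
    }
}
```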
JVM Garbage Collection
Understanding JVM GC is essential for Java performance interviews.

(1) G1 GC (default since Java 9): divides the heap into equal-sized regions and collects the regions with the most garbage first. Configurable pause-time target (default 200 ms). Best for: general-purpose applications with moderate heaps (4-32 GB).

(2) ZGC (production-ready since Java 15): sub-millisecond pause times regardless of heap size (tested up to 16 TB). Uses colored pointers and load barriers for concurrent marking and compaction. Best for: latency-sensitive services (trading, real-time APIs).

(3) Shenandoah: similar goals to ZGC; available in OpenJDK. Concurrent compaction with Brooks pointers.

Heap sizing: set -Xms equal to -Xmx to avoid resize overhead. Size the heap at 2-4x the live data set. Too small means frequent GC; too large means long GC pauses (for non-ZGC collectors).

GC logging: -Xlog:gc* writes detailed GC information. Analyze with GCViewer or GCEasy. Look for pause duration (P99), collection frequency, and heap occupancy after GC (a steady increase suggests a memory leak).

Interview question: "How would you choose between G1 and ZGC?" Answer: G1 for most applications (a good balance of throughput and latency). ZGC when P99 latency spikes from GC pauses are unacceptable (sub-1 ms pauses). ZGC has slightly lower throughput than G1, since its concurrent work adds overhead.
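As a sketch, the sizing and logging advice above might translate into launch flags like these (the 16 GB heap and the choice of ZGC are assumptions for a latency-sensitive service; tune against your own GC logs rather than copying values):

```shell
# -Xms = -Xmx: avoid heap-resize overhead.
# -XX:+UseZGC: sub-millisecond pause collector (Java 15+).
# -Xlog:gc*: detailed GC log, analyzable with GCViewer or GCEasy.
java -Xms16g -Xmx16g \
     -XX:+UseZGC \
     -Xlog:gc*:file=gc.log:time,uptime \
     -jar app.jar
```

For a throughput-oriented service you would drop -XX:+UseZGC and keep the default G1, optionally setting -XX:MaxGCPauseMillis to adjust its pause target.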
Spring Boot Essentials
Spring Boot is the standard Java web framework. Key concepts:

(1) Dependency Injection (DI): Spring manages object creation and wiring. @Component, @Service, and @Repository mark classes for DI. @Autowired injects dependencies (constructor injection is preferred). Benefits: testability (inject mocks), flexibility (swap implementations), and decoupling.

(2) Spring MVC: @RestController, @GetMapping, and @PostMapping for REST APIs. @RequestBody deserializes JSON; @ResponseBody serializes to JSON. @PathVariable and @RequestParam bind URL parameters.

(3) Spring Data JPA: @Entity classes map to database tables. Repository interfaces auto-generate CRUD queries. Custom queries via @Query and JPQL.

(4) Spring Security: authentication (who are you?) and authorization (what can you do?). JWT token validation, OAuth2 integration, and method-level security (@PreAuthorize).

(5) Profiles and configuration: application-dev.yml, application-prod.yml. @Profile("prod") activates beans only in production. Environment variables override config: SPRING_DATASOURCE_URL.

(6) Actuator: production-ready monitoring endpoints such as /actuator/health, /actuator/metrics, and /actuator/prometheus. Integrates with Prometheus for metrics collection.

Interview question: "Explain the Spring bean lifecycle." Answer: instantiation -> dependency injection -> @PostConstruct -> ready for use -> @PreDestroy -> destruction. Scopes: singleton (default, one instance per context), prototype (new instance per injection), request (one per HTTP request), session (one per HTTP session).
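The testability benefit of constructor injection can be shown without any Spring dependency; this framework-free sketch uses a hypothetical UserRepository and GreetingService (in a real application Spring would construct GreetingService and supply the repository bean):

```java
import java.util.Optional;

public class DiDemo {

    // Hypothetical port: in production, a Spring Data repository
    // implementation would be injected here.
    interface UserRepository {
        Optional<String> findNameById(long id);
    }

    static class GreetingService {
        private final UserRepository repo;

        // Constructor injection: the dependency is explicit, final,
        // and trivially replaceable with a fake in a unit test.
        GreetingService(UserRepository repo) {
            this.repo = repo;
        }

        String greet(long id) {
            return repo.findNameById(id)
                .map(name -> "Hello, " + name)
                .orElse("Hello, stranger");
        }
    }

    public static void main(String[] args) {
        // A lambda serves as an in-memory fake; no mocking library needed.
        var service = new GreetingService(
            id -> id == 1 ? Optional.of("Ada") : Optional.empty());
        System.out.println(service.greet(1)); // Hello, Ada
        System.out.println(service.greet(2)); // Hello, stranger
    }
}
```

This is exactly why constructor injection is preferred over field injection: the class can be instantiated and tested with plain `new`, no container required.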