Senior Java interviews at Google, Amazon, LinkedIn, and enterprise shops go deep on JVM internals, concurrency, and the language semantics that distinguish expert Java engineers. These are the questions that expose whether a candidate truly understands the platform or just uses it.
1. JVM Memory Areas
The JVM divides memory into distinct areas:
- Heap: objects live here. Divided into Young Generation (Eden + two Survivor spaces) and Old Generation (Tenured). GC focuses on the heap.
- Method Area (Metaspace): class metadata, bytecode, static variables. Since JDK 8 this area is implemented as Metaspace in native memory (not heap) — PermGen and its "PermGen space" OutOfMemoryError are gone, though Metaspace itself can still be exhausted (capped by -XX:MaxMetaspaceSize).
- Stack: each thread has its own stack of frames (local variables, operand stack, a reference to the method's run-time constant pool). A frame is pushed on method call and popped on return; unbounded recursion triggers StackOverflowError.
- PC Register: each thread has a program counter tracking the current bytecode instruction.
- Native Method Stack: for JNI native methods.
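A minimal sketch of where each piece lives (class and variable names are illustrative, not from any real codebase): statics sit in class data, object bodies on the heap, and each method call pushes a new frame on the calling thread's stack.

```java
public class MemoryAreas {
    static int counter = 0;             // static field: stored with class data (Metaspace);
                                        // a reference-typed static would point into the heap
    static int maxDepth = 0;

    public static void main(String[] args) {
        int local = 42;                 // primitive local: lives in main's stack frame
        int[] data = new int[1024];     // array object: heap; the 'data' reference: stack
        recurse(0);                     // each call below pushes one more stack frame
        System.out.println("frames pushed: " + maxDepth); // frames pushed: 3
    }

    static void recurse(int depth) {
        maxDepth = Math.max(maxDepth, depth);
        if (depth < 3) recurse(depth + 1);  // without the bound: StackOverflowError
    }
}
```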
2. Garbage Collection: G1 and ZGC
G1 (Garbage First): the default collector since JDK 9. Divides the heap into equal-sized regions (1-32MB each); the generations still exist, but as logical sets of regions rather than contiguous spaces. It prioritizes collecting the regions with the most garbage first (hence “Garbage First”) and targets a configurable pause goal (-XX:MaxGCPauseMillis, default 200ms). A good fit for heaps of roughly 4GB-50GB.
ZGC: low-latency collector (experimental in JDK 11, production-ready since JDK 15). Sub-millisecond pauses regardless of heap size, achieved by doing almost all GC work concurrently with the application (colored pointers, load barriers). Max pause: ~1ms even on terabyte-scale heaps. Trade-off: roughly 10-20% more CPU overhead than G1. Use it for latency-sensitive services.
// JVM flags for ZGC (ZGC manages its own pauses; it does not honor -XX:MaxGCPauseMillis)
-XX:+UseZGC -Xmx32g
3. Java Memory Model and volatile
The Java Memory Model (JMM) defines when writes by one thread are visible to another. Without synchronization, the JVM and CPU can reorder instructions for optimization. The happens-before relationship guarantees ordering: a synchronized block exit happens-before a subsequent synchronized block entry on the same lock. A volatile write happens-before a volatile read of the same variable.
// Double-checked locking (broken without volatile)
class Singleton {
    private static volatile Singleton instance; // volatile required!

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
Without volatile, another thread can see a partially constructed Singleton: the reference is non-null (the assignment completed) but the object fields are not yet written (the constructor writes were reordered after the assignment).
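The same visibility guarantee is easiest to see with a shutdown flag. A minimal sketch (class and method names are illustrative): the volatile write in stop() happens-before the worker's next volatile read, so the loop is guaranteed to observe the update; without volatile, the worker could legally spin forever on a cached value.

```java
public class ShutdownFlag {
    private volatile boolean running = true; // without volatile, the worker may never
                                             // observe the update and spin forever

    void runWorker() {
        while (running) {                    // volatile read: re-checked every iteration
            Thread.onSpinWait();
        }
    }

    void stop() {
        running = false;                     // volatile write: visible to the next read
    }

    public static void main(String[] args) throws InterruptedException {
        ShutdownFlag flag = new ShutdownFlag();
        Thread worker = new Thread(flag::runWorker);
        worker.start();
        Thread.sleep(100);
        flag.stop();                         // happens-before the worker's read
        worker.join(2000);
        System.out.println("worker alive: " + worker.isAlive()); // worker alive: false
    }
}
```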
4. Java Concurrency Utilities
// ReentrantLock: more flexible than synchronized
ReentrantLock lock = new ReentrantLock();
lock.lock();
try {
    // critical section
} finally {
    lock.unlock(); // always release in finally
}
// ReadWriteLock: multiple readers, one writer
ReadWriteLock rwLock = new ReentrantReadWriteLock();
rwLock.readLock().lock();   // shared: concurrent reads allowed
rwLock.readLock().unlock(); // release before writing — read-to-write upgrade deadlocks
rwLock.writeLock().lock();  // exclusive: blocks all readers and other writers
rwLock.writeLock().unlock();
// CountDownLatch: wait for N events
CountDownLatch latch = new CountDownLatch(3);
latch.countDown(); // called by each worker
latch.await(); // blocks until count reaches 0
// CompletableFuture: async composition
CompletableFuture.supplyAsync(() -> fetchUser(id))
.thenCompose(user -> fetchOrders(user.id))
.thenApply(orders -> generateReport(orders))
.exceptionally(ex -> fallbackReport());
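Putting the CountDownLatch fragment above into a runnable shape — a minimal sketch (the worker bodies are placeholders): the main thread blocks in await() until every worker has called countDown().

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);        // wait for 3 events
        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 3; i++) {
            pool.submit(() -> {
                // ... the worker's share of the job would go here ...
                latch.countDown();                            // signal completion
            });
        }
        latch.await();                                        // blocks until count == 0
        System.out.println("all workers done");
        pool.shutdown();
    }
}
```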
5. Virtual Threads (Project Loom, Java 21)
Platform threads (OS threads) are expensive: 1MB stack, OS context switch overhead. A JVM with 10K platform threads under load spends significant CPU on context switching. Virtual threads are lightweight JVM-managed threads: millions can exist concurrently. When a virtual thread blocks on I/O, the JVM unmounts it from the carrier (OS) thread and mounts a different virtual thread — the OS thread stays busy.
// Java 21: create 100,000 virtual threads
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 100_000).forEach(i ->
        executor.submit(() -> {
            Thread.sleep(1000); // blocks the virtual thread, not the carrier OS thread
            return i;
        })
    );
} // try-with-resources: close() waits for submitted tasks to complete
Virtual threads make the simple blocking style scalable — no need for reactive/async programming for I/O-bound workloads. Structured concurrency (JEP 453) adds scoped task management on top of virtual threads.
6. Generics and Type Erasure
Java generics use type erasure: generic type parameters are replaced with Object (or their upper bound) during compilation. At runtime, objects carry no type-argument information (generic signatures survive only as class-file metadata used by the compiler and reflection). Implications:
// Cannot create generic arrays: new List<String>[10] is a compile error
List<String>[] array = new List[10]; // raw-type workaround — unchecked warning
// Cannot use instanceof with generic type
if (obj instanceof List<String>) { } // compile error
// PECS: Producer Extends, Consumer Super
void copy(List<? extends Number> src, // producer: reads Numbers
List<? super Number> dst) { // consumer: writes Numbers
for (Number n : src) dst.add(n);
}
// List<Integer> works as src (Integer extends Number)
// List<Object> works as dst (Object super Number)
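Erasure is easy to demonstrate directly: after compilation, a List<String> and a List<Integer> are the same runtime class. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Both erase to ArrayList — one Class object, no type arguments at runtime.
        System.out.println(strings.getClass() == ints.getClass()); // true
        System.out.println(strings.getClass().getName());          // java.util.ArrayList
    }
}
```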
7. equals() and hashCode() contract
If two objects are equal (a.equals(b) == true), they MUST have the same hashCode. The converse is not required (hash collisions are allowed). Violating this breaks HashMap and HashSet behavior: an object can be put into a HashMap but not found when the hashCode or equals logic is inconsistent. Always override both together. Use Objects.hash() and Objects.equals() for null-safe, concise implementations.
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Point)) return false;
    Point p = (Point) o;
    return x == p.x && y == p.y;
}

@Override
public int hashCode() {
    return Objects.hash(x, y);
}
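The failure mode is worth seeing once. A minimal sketch (the Key class is illustrative): equals() is overridden but hashCode() is not, so two logically equal keys get different identity hashes, land in different buckets, and the lookup misses.

```java
import java.util.HashMap;
import java.util.Map;

public class BrokenKeyDemo {
    static final class Key {
        final int x;
        Key(int x) { this.x = x; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).x == x;
        }
        // hashCode() NOT overridden — falls back to identity hash
    }

    public static void main(String[] args) {
        Map<Key, String> map = new HashMap<>();
        map.put(new Key(1), "one");
        // Equal by equals(), but different identity hashes -> wrong bucket -> miss.
        System.out.println(map.get(new Key(1))); // null
    }
}
```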
8. Common Design Patterns in Java
Builder: construct complex objects step by step (avoid telescoping constructors). Common in Lombok @Builder, protobuf, and Retrofit.
Factory Method: create objects without specifying the exact class. Enables dependency injection and testability.
Strategy: define a family of algorithms behind an interface and swap them at runtime. Java Comparator is the canonical example.
Observer: one-to-many dependency — when one object changes, all dependents are notified. Java event listeners, Spring ApplicationEvent.
Template Method: define the skeleton of an algorithm in a base class, letting subclasses override specific steps without changing the structure. Spring JdbcTemplate is a classic example.
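Strategy is the easiest of these to show concretely, since Comparator is the canonical example from the list above. A minimal sketch: the same sort call takes interchangeable comparison strategies at runtime.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class StrategyDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("pear", "fig", "banana");

        // Strategy 1: natural (alphabetical) order
        words.sort(Comparator.naturalOrder());
        System.out.println(words); // [banana, fig, pear]

        // Strategy 2: by length — same sort call, different algorithm plugged in
        words.sort(Comparator.comparingInt(String::length));
        System.out.println(words); // [fig, pear, banana]
    }
}
```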
9. String interning and StringBuilder
// String pool (intern)
String a = "hello";
String b = "hello";
// a == b           -> true: both literals resolve to the same interned String
String c = new String("hello");
// a == c           -> false: 'new' always allocates a distinct object
// a.equals(c)      -> true: same character content
// c.intern() == a  -> true: intern() returns the pooled reference
// String concatenation in loops — avoid
String bad = "";
for (int i = 0; i < 10000; i++) {
    bad += i; // each += copies the entire string so far — O(n^2) total work
}
// Use StringBuilder — appends into one growable buffer, O(n) total
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 10000; i++) {
    sb.append(i);
}
String result = sb.toString();
Frequently Asked Questions
What is the difference between G1GC and ZGC in Java?
G1 (Garbage First) GC is the default since JDK 9. It divides the heap into equal-sized regions and prioritizes collecting regions with the most garbage, targeting a configurable pause goal (default 200ms). G1 works well for heaps of 4GB-50GB and balances throughput with pause predictability. ZGC (Z Garbage Collector), production-ready since JDK 15, achieves sub-millisecond pause times regardless of heap size by doing almost all GC work concurrently with the application. It uses colored pointers and load barriers to track object relocations without stopping the world. ZGC pauses are typically under 1ms even on terabyte heaps. The tradeoff: ZGC uses 10-20% more CPU than G1 due to concurrent GC work. Choose G1 for general-purpose applications; choose ZGC for latency-sensitive services where GC pauses above 1ms are unacceptable (trading systems, real-time APIs, low-latency microservices).
How do Java virtual threads (Project Loom) differ from platform threads?
Platform threads are OS threads: creating one allocates approximately 1MB of stack memory and requires OS scheduler involvement for context switching. A JVM typically supports 1,000-10,000 platform threads before performance degrades due to memory pressure and scheduling overhead. Virtual threads are JVM-managed lightweight threads introduced in Java 21. They run on top of a small pool of carrier (platform) threads. When a virtual thread blocks on I/O (network call, file read, database query), the JVM unmounts it from the carrier thread — the carrier thread is immediately available to run another virtual thread. This allows millions of virtual threads to coexist. For I/O-bound workloads, virtual threads provide the throughput of async/reactive code while retaining the simple, readable sequential programming model. Virtual threads are not faster for CPU-bound work (they still require CPU time); they excel at concurrency where the bottleneck is I/O wait.
Why must hashCode() and equals() be consistent in Java?
HashMap and HashSet use a two-step lookup: first compute hashCode() to find the bucket, then use equals() to find the exact entry within the bucket. If two equal objects have different hashCodes, they will be placed in different buckets. When you look up an object, HashMap computes its hashCode, finds the bucket, and searches only that bucket — it will never find the entry in the other bucket even if equals() would return true. The contract: if a.equals(b) is true, then a.hashCode() must equal b.hashCode(). The reverse is not required (hash collisions are fine). Common mistake: overriding equals() without overriding hashCode() — the default hashCode() is based on object identity (memory address), so two logically equal objects with the same field values will have different hashCodes and cannot be found in a HashMap. Use Objects.hash(field1, field2, …) to generate a consistent hashCode based on the same fields used in equals().
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is the difference between G1GC and ZGC in Java?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "G1 (Garbage First) GC is the default since JDK 9. It divides the heap into equal-sized regions and prioritizes collecting regions with the most garbage, targeting a configurable pause goal (default 200ms). G1 works well for heaps of 4GB-50GB and balances throughput with pause predictability. ZGC (Z Garbage Collector), production-ready since JDK 15, achieves sub-millisecond pause times regardless of heap size by doing almost all GC work concurrently with the application. It uses colored pointers and load barriers to track object relocations without stopping the world. ZGC pauses are typically under 1ms even on terabyte heaps. The tradeoff: ZGC uses 10-20% more CPU than G1 due to concurrent GC work. Choose G1 for general-purpose applications; choose ZGC for latency-sensitive services where GC pauses above 1ms are unacceptable (trading systems, real-time APIs, low-latency microservices)."
      }
    },
    {
      "@type": "Question",
      "name": "How do Java virtual threads (Project Loom) differ from platform threads?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Platform threads are OS threads: creating one allocates approximately 1MB of stack memory and requires OS scheduler involvement for context switching. A JVM typically supports 1,000-10,000 platform threads before performance degrades due to memory pressure and scheduling overhead. Virtual threads are JVM-managed lightweight threads introduced in Java 21. They run on top of a small pool of carrier (platform) threads. When a virtual thread blocks on I/O (network call, file read, database query), the JVM unmounts it from the carrier thread — the carrier thread is immediately available to run another virtual thread. This allows millions of virtual threads to coexist. For I/O-bound workloads, virtual threads provide the throughput of async/reactive code while retaining the simple, readable sequential programming model. Virtual threads are not faster for CPU-bound work (they still require CPU time); they excel at concurrency where the bottleneck is I/O wait."
      }
    },
    {
      "@type": "Question",
      "name": "Why must hashCode() and equals() be consistent in Java?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "HashMap and HashSet use a two-step lookup: first compute hashCode() to find the bucket, then use equals() to find the exact entry within the bucket. If two equal objects have different hashCodes, they will be placed in different buckets. When you look up an object, HashMap computes its hashCode, finds the bucket, and searches only that bucket — it will never find the entry in the other bucket even if equals() would return true. The contract: if a.equals(b) is true, then a.hashCode() must equal b.hashCode(). The reverse is not required (hash collisions are fine). Common mistake: overriding equals() without overriding hashCode() — the default hashCode() is based on object identity (memory address), so two logically equal objects with the same field values will have different hashCodes and cannot be found in a HashMap. Use Objects.hash(field1, field2, …) to generate a consistent hashCode based on the same fields used in equals()."
      }
    }
  ]
}