1. Introduction
Java applications often run on multi-core, multi-threaded systems, where multiple threads can read and write shared variables concurrently. While this enables powerful parallelism, it introduces serious challenges around memory visibility, instruction ordering, and data consistency.
To address this, Java introduced the Java Memory Model (JMM) in Java 1.5 (JSR-133) as part of the language specification. The JMM defines how threads interact through memory and ensures consistent behavior across all platforms and hardware, despite differences in CPU architecture or compiler optimizations.
2. Why is the Java Memory Model (JMM) Necessary?
Modern CPUs and compilers use several techniques to optimize performance:
- Caching values in registers or CPU-local caches
- Reordering instructions (both by compilers and CPUs)
- Delaying writes or buffering reads for optimization
These optimizations can cause unexpected behavior in multi-threaded programs, such as:
- A thread reading stale values
- Reordered instructions that violate logical dependencies
- Updates by one thread never becoming visible to another
Example Problem Without JMM
```java
// Thread 1
value = 42;
ready = true;

// Thread 2
if (ready) {
    System.out.println(value); // may print 0 if the write to value is not visible
}
```
Even though value = 42 comes before ready = true in the code, CPU or compiler optimizations can reorder them. Without a proper memory model, Thread 2 may see ready == true and still read an outdated value of value (i.e., 0).
The JMM provides rules to prevent such surprises and ensure predictable, consistent multithreaded behavior.
3. Key Concepts of the Java Memory Model
3.1 Main Memory vs Working Memory (Thread-local Caches)
The Java Memory Model is built around the idea that:
- Main Memory: Shared heap memory that all threads can access.
- Working Memory: Each thread can cache variables (in CPU registers or CPU caches) for better performance.
Reads and writes may not happen directly to main memory:
- A thread may read a variable once and reuse the cached value multiple times.
- A thread may write to a variable, but the change may stay in local cache for some time and not be visible to other threads.
This is the root cause of visibility problems in concurrent programming.
3.2 The Happens-Before Relationship
The happens-before relationship is the foundation of JMM.
If Action A happens-before Action B, then:
- All effects of Action A (such as writing a variable) are visible to Action B.
- Action A is ordered before Action B: the compiler and CPU may not reorder them in any way that B could observe.
Happens-before is a guarantee about program-order semantics and visibility, not about wall-clock execution time.
4. Rules That Establish Happens-Before Relationships
Java defines several rules to determine happens-before relationships.
| Rule | Meaning |
|---|---|
| Program Order Rule | Within a single thread, each action happens-before those that come later. |
| Monitor Lock Rule | An unlock (synchronized exit) happens-before every subsequent lock (synchronized entry) on that same object. |
| Volatile Variable Rule | A write to a volatile variable happens-before every subsequent read of that variable. |
| Thread Start Rule | A call to Thread.start() happens-before any actions in the started thread. |
| Thread Join Rule | The end of a thread (termination) happens-before another thread successfully returns from Thread.join(). |
| Finalizer Rule | The end of a constructor for an object happens-before the start of its finalizer. |
| Transitivity | If A happens-before B, and B happens-before C, then A happens-before C. |
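Two of these rules can be demonstrated directly. In this sketch, the Thread Start and Thread Join rules alone guarantee that main sees the child thread's write, without any volatile or synchronized:

```java
public class StartJoinDemo {
    static int data = 0; // plain field: no volatile, no locks

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> data = 42); // write inside the child thread

        worker.start(); // Thread Start Rule: actions before start() are visible in the child
        worker.join();  // Thread Join Rule: the child's writes are visible after join() returns

        System.out.println(data); // guaranteed to print 42
    }
}
```

By transitivity, the write to data happens-before the return from join(), which happens-before the println in program order.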
5. Visibility Guarantees in JMM
The Problem: Invisible Writes
Consider:
```java
class Shared {
    boolean flag = false;

    void writer() {
        flag = true;
    }

    void reader() {
        if (flag) {
            // do something
        }
    }
}
```
Here, reader() might never see the updated value of flag, even if writer() sets it to true. This is due to caching — the reader thread might be seeing an old value of flag from its CPU cache.
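One fix is to declare flag as volatile, which forces every read and write of it to go through main memory. A minimal sketch of the corrected class:

```java
class Shared {
    volatile boolean flag = false; // volatile: reads/writes go through main memory

    void writer() {
        flag = true; // volatile write: becomes visible to all subsequent volatile reads
    }

    void reader() {
        if (flag) { // volatile read: observes the latest write, not a stale cached value
            // do something
        }
    }
}
```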
JMM Solution: Memory Barriers
Java uses memory barriers (also known as fences) to enforce visibility:
- Volatile reads/writes, synchronized blocks, and thread lifecycle methods (e.g., start(), join()) all insert memory barriers.
- These barriers ensure data is flushed to or fetched from main memory, preventing stale reads or lost writes.
6. Ordering Guarantees in JMM
6.1 The Problem: Instruction Reordering
Java allows the compiler and the CPU to reorder instructions for optimization as long as single-threaded semantics are preserved. But in multi-threaded code, this can cause bugs.
Example
```java
// Thread 1
a = 1;
flag = true;

// Thread 2
if (flag) {
    System.out.println(a); // may print 0 if the writes are reordered
}
```
If the JVM or CPU reorders a = 1 and flag = true, then Thread 2 might see flag == true but a == 0.
6.2 JMM Solution: Prevent Reordering Across Synchronization Boundaries
JMM uses happens-before rules and memory barriers to prevent reordering across:
- volatile variables
- synchronized blocks
- Thread lifecycle methods (start(), join())
So if flag is declared volatile, the write to a will happen-before the write to flag, and Thread 2 will see both changes correctly.
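A sketch of that guarantee, using the field names a and flag from the example above wrapped in an illustrative class:

```java
class Publisher {
    int a = 0;                     // plain field
    volatile boolean flag = false; // the volatile "publication" flag

    void writer() {
        a = 1;       // 1. plain write
        flag = true; // 2. volatile write: the write to a cannot be reordered after it
    }

    void reader() {
        if (flag) {                // volatile read: acquires everything written before
            System.out.println(a); // the volatile write, so this prints 1, never 0
        }
    }
}
```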
7. Volatile Keyword and JMM
7.1 How Volatile Ensures Visibility and Ordering
A volatile variable has two key guarantees:
- Visibility: When a thread writes to a volatile variable, the new value becomes visible to all threads; a subsequent read sees the latest write rather than a stale cached value.
- Ordering: Volatile variables create happens-before relationships:
- A write to a volatile variable happens-before a subsequent read of that same variable.
- Operations before a volatile write cannot be reordered after it.
- Operations after a volatile read cannot be reordered before it.
When to Use Volatile
Use volatile when:
- A variable is read and written by multiple threads.
- Atomicity is not required (volatile is not atomic for compound actions).
Example use case: state flags and stop signals.
volatile boolean running = true;
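A common pattern built on such a flag is a worker loop that another thread can stop; a minimal sketch:

```java
class Worker implements Runnable {
    private volatile boolean running = true; // stop signal shared across threads

    public void stop() {
        running = false; // volatile write: promptly visible to the worker loop
    }

    @Override
    public void run() {
        while (running) {
            // do one unit of work; without volatile, the JIT could hoist the
            // read of running out of the loop and never observe stop()
        }
    }
}
```

Note that volatile only makes the flag visible; it does not make compound updates like running = !running atomic.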
8. Synchronized and the JMM
What synchronized Does
synchronized ensures:
- Mutual exclusion: Only one thread can access the block at a time.
- Visibility: When a thread enters a synchronized block, it refreshes its working memory from main memory; when it exits, it flushes its writes back to main memory.
- Ordering: Acquiring and releasing the lock creates a happens-before relationship.
Example
```java
synchronized (lock) {
    // operations
}
```
- Entering the block: Load latest values from main memory.
- Exiting the block: Flush changes to main memory.
- Any thread acquiring the lock sees the latest updates made by the previous thread.
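Putting these guarantees together, a minimal sketch of a thread-safe counter (the lock object and class name are illustrative):

```java
class Counter {
    private final Object lock = new Object();
    private int count = 0;

    void increment() {
        synchronized (lock) { // acquire: refresh count from main memory
            count++;          // the read-modify-write is atomic while holding the lock
        }                     // release: flush the updated count to main memory
    }

    int get() {
        synchronized (lock) { // acquire: see the latest count
            return count;
        }
    }
}
```

Unlike a volatile int, this makes the compound count++ (read, add, write) safe under contention.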
Comparison: volatile vs synchronized
| Feature | volatile | synchronized |
|---|---|---|
| Visibility | Yes | Yes |
| Atomicity | No | Yes |
| Blocking | No | Yes |
| Performance | High | Slower due to locking |
| Use Case | Flags, state vars | Critical sections, compound ops |
9. JMM and CPU Architecture
The JMM abstracts away the differences between CPU memory models. On different architectures (like x86, ARM, POWER), the JVM inserts platform-specific memory barriers to ensure that JMM rules are followed.
| Java Operation | x86 Barrier Instructions | ARM/POWER Instructions |
|---|---|---|
| volatile write | StoreStore + StoreLoad | DMB (Data Memory Barrier) |
| volatile read | LoadLoad + LoadStore | DMB |
| synchronized block | LOCK-prefixed instructions | DMB, ISB |
This ensures your Java code behaves the same whether it’s running on Intel or ARM CPUs.
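Since Java 9, these barriers also have an explicit (and rarely needed) API surface: the static fence methods on java.lang.invoke.VarHandle. A sketch of how they map onto the volatile write/read barriers above, with illustrative field names:

```java
import java.lang.invoke.VarHandle;

class FenceDemo {
    static int a = 0;
    static boolean flag = false;

    static void publish() {
        a = 1;
        VarHandle.releaseFence(); // like the barrier before a volatile write:
                                  // earlier writes cannot move below this point
        flag = true;
    }

    static boolean consume() {
        boolean seen = flag;
        VarHandle.acquireFence(); // like the barrier after a volatile read:
                                  // later reads cannot move above this point
        return seen && a == 1;
    }
}
```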
