1. The JVM Starts as an Interpreter
When you start your Java application, the JVM initially behaves like a classic interpreter:
- It reads your compiled Java bytecode.
- It executes that bytecode one instruction at a time, as needed.
This approach offers an important benefit:
Write Once, Run Anywhere
Your Java bytecode runs on any platform with a JVM implementation — Windows, macOS, Linux, and more.
However, there’s a drawback:
Interpreted execution is slower
Compared to languages like C or C++, whose compilers translate source code directly into native machine code, interpreted bytecode runs slower because the JVM must decode and execute each instruction at runtime.
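To see what the interpreter actually works through, you can compile a small class and disassemble its bytecode with javap -c, the disassembler that ships with the JDK. A minimal sketch (the class and method names are just for illustration):

```java
// Square.java: compile with "javac Square.java",
// then inspect the bytecode with "javap -c Square".
public class Square {

    // For this method, javap shows a handful of instructions roughly like:
    //   iload_0   // push the int argument onto the operand stack
    //   iload_0   // push it again
    //   imul      // multiply the two values on top of the stack
    //   ireturn   // return the int result
    // (exact output can vary slightly between JDK versions)
    public static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) {
        System.out.println(square(12)); // prints 144
    }
}
```

These are the instructions the interpreter executes one at a time on every call, until the JIT compiler described in the following sections takes over.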
2. Why Native Compilation Is Faster
Languages such as C are compiled into native executables that the operating system can load and the CPU can execute directly, without any intermediate interpreter. This makes them extremely fast to execute.
So why doesn’t Java do that from the beginning?
Because:
- Native binaries are platform-dependent, and
- Java’s goal is platform independence.
To solve the performance gap while keeping portability, Java uses an advanced hybrid model: JIT compilation.
3. Introducing Just-In-Time (JIT) Compilation
The JVM continuously monitors the running application and identifies:
- Frequently used methods
- Frequently executed loops
- Hot code paths (called hotspots)
When the JVM detects that a piece of code runs often, it decides:
“This code will run faster if compiled to native machine code.”
At that moment, the JVM performs JIT compilation and produces optimized native code for the hotspot.
After compilation:
- Some parts of your program still run as interpreted bytecode.
- Other parts now run as fast, platform-specific machine code.
This is why Java applications often get faster over time as the JVM learns which code needs optimization.
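You can watch the JVM make these decisions by running your program with the standard HotSpot flag -XX:+PrintCompilation, which logs each method as it gets compiled. A minimal sketch, with an illustrative class name:

```java
// HotLoop.java: run with
//   javac HotLoop.java
//   java -XX:+PrintCompilation HotLoop
// Among the many JDK methods in the log, a line for HotLoop::compute
// appears once the method has been called often enough to count as hot.
public class HotLoop {

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10_000_000; i++) {
            total += compute(i); // hot call site: executed millions of times
        }
        System.out.println("total = " + total);
    }

    private static long compute(int i) {
        return (long) i * 31 + (i & 7);
    }
}
```

The exact thresholds and log format depend on the JVM version and its tiered-compilation settings, but the pattern is the same: only code that runs frequently shows up.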
4. Native Machine Code Is Platform Specific
JIT compilation produces machine instructions that are specific to the underlying operating system and CPU architecture:
- On Windows, the JVM produces native Windows machine code.
- On macOS, it produces native macOS machine code.
This generated native code is not portable across platforms. Each JVM generates optimized code tailored to its own system.
5. JIT Compilation Happens in the Background
JIT compilation does not interrupt your program. Here’s how:
- The JVM is a multithreaded system.
- A separate background thread performs the compilation.
- Meanwhile, the main execution thread continues interpreting the existing bytecode.
Once compilation is finished:
The JVM seamlessly switches from interpreted bytecode to the optimized native version.
You will not notice this switch — it is completely transparent.
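One way to confirm that this work really happens alongside your own code is the standard java.lang.management.CompilationMXBean, which reports the JIT compiler’s name and the accumulated time it has spent compiling. A small sketch:

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

// Runs some hot code, then asks the JVM how much time its JIT compiler
// has spent working while the main thread kept executing.
public class JitActivity {

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 20_000_000; i++) {
            sum += (long) i * i % 97; // enough repetition to trigger compilation
        }

        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit != null) { // null only on JVMs without a JIT compiler
            System.out.println("JIT compiler: " + jit.getName());
            if (jit.isCompilationTimeMonitoringSupported()) {
                System.out.println("Total compilation time so far: "
                        + jit.getTotalCompilationTime() + " ms");
            }
        }
        System.out.println("sum = " + sum);
    }
}
```

All of that compilation time was spent on separate compiler threads; by default, your main thread did not have to stop and wait for it.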
6. Temporary Performance Dip (Rare)
In extremely CPU-heavy applications, you might briefly experience a small, temporary drop in performance while JIT compilation is in progress, because the background compiler threads compete with your application for CPU time.
However, this is very rare and short-lived.
The long-term performance gain from using optimized native code far outweighs this momentary cost.
7. Code Runs Faster the Longer the Application Runs
Because the JVM continuously profiles your application’s behavior, your long-running application typically becomes:
Progressively faster over time.
Examples:
- A method called thousands of times per minute → JIT-compiled quickly
- A method called once a day → may never be JIT-compiled
The JVM invests effort only in optimizing code that clearly benefits from it.
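This contrast is easy to observe with the same -XX:+PrintCompilation flag shown earlier. In the sketch below (method names are purely illustrative), the frequently called method shows up in the compilation log, while the method called only once normally never does:

```java
// HotVsCold.java: run with java -XX:+PrintCompilation HotVsCold
public class HotVsCold {

    public static void main(String[] args) {
        System.out.println("rare: " + rarelyCalled()); // invoked once

        long total = 0;
        for (int i = 0; i < 5_000_000; i++) {
            total += frequentlyCalled(i);              // invoked millions of times
        }
        System.out.println("hot total: " + total);
    }

    // Called a single time: almost certainly stays interpreted.
    private static int rarelyCalled() {
        return 42;
    }

    // Called millions of times: crosses the JVM's invocation thresholds
    // and gets JIT-compiled.
    private static long frequentlyCalled(int i) {
        return (long) i % 13 + 1;
    }
}
```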
8. JIT Compilation and Performance Measurement
If you’re benchmarking or comparing two methods, you need to consider:
Are you measuring them before or after JIT compilation?
Some common scenarios:
- Measuring immediately after startup → code still interpreted → slower
- Measuring after warm-up → code JIT-compiled → faster
This difference can cause misleading results if you’re not aware of it.
We’ll revisit this topic later when we discuss proper performance measurement techniques.
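A naive way to see the warm-up effect for yourself (for serious benchmarking you would normally use a harness such as JMH) is to time the same work several times within one run. The sketch below uses arbitrary iteration counts; the later rounds usually report lower times once the hot method has been compiled:

```java
// WarmupDemo.java: round 1 is typically the slowest because hotMethod(...)
// still runs interpreted; later rounds benefit from JIT-compiled code.
public class WarmupDemo {

    public static void main(String[] args) {
        for (int round = 1; round <= 5; round++) {
            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 5_000_000; i++) {
                result += hotMethod(i);
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Round " + round + ": " + elapsedMs
                    + " ms (result=" + result + ")");
        }
    }

    private static long hotMethod(int i) {
        return (long) i * 31 + (i % 3);
    }
}
```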
9. What Code Can the JVM JIT-Compile?
Although we often think in terms of methods, the JVM is actually more flexible.
Any sequence of bytecode instructions can be JIT-compiled.
From the developer’s perspective, this usually means:
- Methods
- Code blocks
- Loops
- Repeated control-flow paths
All of them can potentially become optimized native code if they are executed frequently.
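Loops are a good example. Even while a long-running loop is still inside a single method call, HotSpot can compile the loop body and switch to the native version mid-execution, a technique known as on-stack replacement (OSR). With -XX:+PrintCompilation, OSR compilations are typically marked with a % in the log. A small sketch:

```java
// LongLoop.java: run with java -XX:+PrintCompilation LongLoop
// The loop runs long enough inside a single call to main(...) that the
// JVM compiles the loop body via on-stack replacement; look for a line
// for LongLoop::main carrying a '%' marker in the log output.
public class LongLoop {

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += (i ^ (i >>> 3)) & 0xFF; // cheap but non-trivial loop body
        }
        System.out.println("sum = " + sum);
    }
}
```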
This is why Java’s performance can approach, and often match, the speed of traditionally compiled languages while still retaining platform independence.
