Learnitweb

JVM Just-In-Time Compilation (JIT)

Java’s performance story is often misunderstood. Many people hear:

“Java is interpreted, so it must be slow.”

But the truth is far more interesting.

The JVM does not behave like a simple interpreter. It is a sophisticated, intelligent, adaptive execution engine capable of learning how your program behaves and optimizing itself continuously.

1. The Initial Phase: JVM as an Interpreter

When your application launches, the JVM behaves like a traditional interpreter:

  1. Read a bytecode instruction
  2. Decode what operation it represents
  3. Execute that operation
  4. Move to the next instruction

This is fast enough, but not as fast as native machine code. Interpreters always incur overhead because every instruction needs to be examined at runtime.
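The four-step loop above can be sketched as a toy switch-based interpreter. This is a deliberately simplified model with invented opcodes, not the real JVM's instruction set:

```java
// Toy interpreter illustrating the fetch-decode-execute cycle.
// The opcodes are invented for illustration; real JVM bytecode is far richer.
public class ToyInterpreter {
    static final int PUSH = 0, ADD = 1, HALT = 2;

    static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0, pc = 0;
        while (true) {
            int op = code[pc++];                    // 1. read an instruction
            switch (op) {                           // 2. decode it
                case PUSH: stack[sp++] = code[pc++]; break;        // 3. execute
                case ADD:  stack[sp - 2] += stack[sp - 1]; sp--; break;
                case HALT: return stack[sp - 1];
            }                                       // 4. loop to the next one
        }
    }

    public static void main(String[] args) {
        // push 2, push 3, add, halt
        System.out.println(run(new int[]{PUSH, 2, PUSH, 3, ADD, HALT})); // 5
    }
}
```

Every trip around that `while` loop is pure overhead compared to the single native `add` instruction a compiler would emit.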

Here is a simplified textual diagram:

+----------------------+         +---------------------------+
|      Bytecode        | --->    | JVM Interpreter Engine    |
+----------------------+         +---------------------------+
                                          |
                                          v
                               [Execute instruction]

This design gives Java portability, but not maximum performance at startup. This leads us to the comparison with natively compiled languages.

2. Why Native Code (C/C++) Runs Faster Initially

Languages like C and C++:

  • Compile directly into native machine code
  • Produce binaries like .exe, .out, .dll
  • Are executed by the OS without any interpreter

A textual diagram:

C/C++ Source → Compiler → Native Machine Code → Executed by CPU directly

This approach produces very fast execution but no binary portability. A Windows executable cannot run on macOS, and a macOS executable cannot run on Linux.

Java solves this problem through bytecode, but bytecode can’t initially match the speed of native CPU instructions.

So the JVM needs a more advanced strategy.

3. JVM’s Big Idea: Combine Interpretation + Native Compilation

The JVM designers introduced a feature that revolutionized performance:

Just-In-Time Compilation (JIT)

JIT allows Java to achieve:

  • Portability (through bytecode)
  • Speed (through dynamic native compilation)

The JVM monitors your program as it runs and looks for:

  • Highly used methods
  • Frequently executed loops
  • Code paths that run thousands or millions of times

These are called hot spots. (The mainstream OpenJDK virtual machine is even named HotSpot, after this idea.)

When the JVM detects hot spots, it decides:

“This bytecode runs too often to keep interpreting.
Let me compile it to native machine code.”

4. How the JVM Identifies Hot Code

The JVM maintains counters internally:

  • How many times a method has been called
  • How many loop iterations were executed
  • How often a branch (if/else) is taken
  • How frequently an inline candidate appears

A simplified textual diagram:

                 +------------------------------------+
                 |    JVM Runtime Profiling Engine    |
                 +------------------------------------+
                                |
         --------------------------------------------------------
         |                |                  |                |
 Tracks: Method Calls   Loop Counts     Branch Frequency   Type Usage

Based on this profiling, if a method becomes “hot,” the JVM adds it to a queue of methods that should be JIT-compiled.
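A toy model of this counter-based bookkeeping is easy to sketch. The threshold value here is invented; real HotSpot uses tiered, per-compiler thresholds (tunable via flags such as `-XX:CompileThreshold`):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the JVM's invocation counters: once a method's call count
// crosses a threshold, it is queued for "compilation".
public class HotnessProfiler {
    static final int THRESHOLD = 10_000;  // invented; real thresholds are tiered
    final Map<String, Integer> counters = new HashMap<>();
    final List<String> compileQueue = new ArrayList<>();

    void recordCall(String method) {
        int count = counters.merge(method, 1, Integer::sum); // bump the counter
        if (count == THRESHOLD) {
            compileQueue.add(method); // method just became "hot": queue for JIT
        }
    }

    public static void main(String[] args) {
        HotnessProfiler profiler = new HotnessProfiler();
        for (int i = 0; i < 15_000; i++) profiler.recordCall("Foo.bar");
        profiler.recordCall("Foo.rare");
        System.out.println(profiler.compileQueue); // only the hot method is queued
    }
}
```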

5. JIT Compilation: Bytecode → Native Machine Code

When a method is selected for JIT compilation, the JVM hands it to a dedicated background compiler thread.

This prevents the main execution thread from pausing.

  • Your program keeps running normally
  • The JIT compiler works in the background
  • No interruption to the user

Once compilation finishes, the JVM seamlessly redirects execution from the interpreter to the native version.

What does “native” mean?

Native instructions depend on your OS and CPU:

Platform    JIT Output
--------    -----------------------------
Windows     Windows-specific machine code
macOS       macOS ARM/x86 native code
Linux       Linux ARM/x86 native code

The JVM internally stores and uses this optimized version for subsequent executions.

A textual diagram showing the pipeline:

            +-----------------------+
Bytecode -> | JVM Interpreter       |
            +-----------------------+
                       |
                       | (detect hot method)
                       v
            +-----------------------+
            | JIT Compiler Thread   |
            +-----------------------+
                       |
                       v
           +--------------------------------+
           | Native Machine Code Generated  |
           +--------------------------------+
                       |
                       v
            JVM switches to native execution

6. The Mixed Execution Model

After JIT kicks in, your program is running in two modes at the same time:

  1. Interpreted Mode — for cold or rarely used code
  2. Compiled Mode — for hot, heavily used code

This hybrid execution is the secret behind Java’s performance.

  • Fast startup → thanks to interpretation
  • Fast long-term execution → thanks to JIT

As your program keeps running, more hot code gets JIT compiled, and your application becomes faster.

7. Why Java Programs Get Faster Over Time (Warm-up Period)

When a Java application first starts:

  • Nothing is JIT-compiled
  • Everything is interpreted
  • Execution is slower

As the program runs:

  • JVM collects data
  • JIT compiler transforms hot spots
  • Optimizations accumulate

Eventually, your application reaches its peak optimized performance.

This is why long-running systems like:

  • web servers
  • microservices
  • game servers
  • big data pipelines
  • distributed computing applications

reach maximum speed only after warming up.
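The warm-up effect can be observed with a crude timing loop. The absolute numbers vary by machine, so no particular output is guaranteed, but later rounds are typically faster than the first once the JIT has compiled the hot method:

```java
// Rough warm-up demo: time the same method repeatedly. Later rounds are
// usually faster once the JIT has compiled work(). Timings vary by machine,
// so treat the printed numbers as illustrative only.
public class WarmupDemo {
    static long work() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) sum += (long) i * i % 7;
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 1; round <= 5; round++) {
            long start = System.nanoTime();
            long result = work();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("round " + round + ": " + elapsedMs
                    + " ms (result " + result + ")");
        }
    }
}
```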

8. A Rare Scenario: Temporary Performance Dip During JIT

Although JIT runs in its own thread, extremely CPU-intensive applications may momentarily experience:

  • Slight dips in performance
  • Brief increases in CPU usage

Why?

Because JIT consumes CPU to generate optimized code.
However, this is almost always:

  • Short-lived
  • Minimal
  • Greatly outweighed by long-term performance gains

This effect is typically noticeable only in domains like:

  • High-frequency trading
  • Scientific simulations
  • Real-time systems
  • High-performance computing (HPC)

For most applications, users will never notice.

9. JIT and Performance Testing: A Critical Insight

If you benchmark Java code incorrectly, your results may be misleading.

Why?

Because your code’s performance changes depending on:

  • Whether it has been JIT-compiled
  • The warm-up duration
  • How many times loops or methods have executed

This leads to two distinct states:

State 1 — Pre-JIT (interpreted performance)

  • Slowest performance
  • Happens during initialization

State 2 — Post-JIT (optimized performance)

  • Fastest performance
  • Happens after warm-up

Therefore:

Always warm up your JVM before benchmarking.

Tools like JMH (Java Microbenchmark Harness) automatically handle warm-up for this reason.
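For illustration only, here is a hand-rolled miniature of what JMH automates (warm-up iterations before measurement, plus taking a robust statistic). For real benchmarks, prefer JMH itself; this sketch skips forks, dead-code elimination guards, and statistics:

```java
// A minimal hand-rolled benchmark with an explicit warm-up phase --
// a simplified illustration of the warm-up idea that JMH automates.
public class MiniBench {
    static long payload() {
        long acc = 0;
        for (int i = 0; i < 1_000_000; i++) acc += Integer.bitCount(i);
        return acc;
    }

    static double measure(int iterations) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < iterations; i++) {
            long t0 = System.nanoTime();
            payload();
            best = Math.min(best, System.nanoTime() - t0);
        }
        return best / 1e6; // best observed time, in milliseconds
    }

    public static void main(String[] args) {
        measure(20);                        // warm-up: let the JIT compile payload()
        System.out.printf("best: %.2f ms%n", measure(10)); // measure after warm-up
    }
}
```

Skipping the warm-up call would mix interpreted and compiled timings together, which is exactly the benchmarking mistake described above.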

10. What Exactly Gets JIT-Compiled?

Beginners often think:

“JIT compiles whole methods.”

That’s mostly true, but not the full story.

Expert-level understanding:

  • JVM compiles regions of bytecode
  • These regions often correspond to entire methods
  • But the JVM may also optimize only loops, only branches, or inlined code blocks
  • JVM may split methods into compilable sections

For practical developer understanding:

It is easiest to think of JIT as compiling methods or heavily used code blocks.

This mental model is sufficient for 99% of programming scenarios.

11. A Textual Overview Diagram: Entire JVM Execution Pipeline

Here is a full, combined textual diagram summarizing JVM execution:

          +-----------------------------------------------------+
          |                 Java Source Code                    |
          +-----------------------------------------------------+
                               |
                               | javac
                               v
          +-----------------------------------------------------+
          |                     Bytecode (.class)               |
          +-----------------------------------------------------+
                               |
                               | java command
                               v
          +-----------------------------------------------------+
          |                JVM Starts Execution                 |
          |    (Interprets bytecode instruction by instruction) |
          +-----------------------------------------------------+
                               |
     ---------------------------------------------------------------
     | JVM Profiling Engine monitors:                              |
     | - method calls                                              |
     | - loop iterations                                           |
     | - branching behavior                                        |
     | - frequent code regions                                     |
     ---------------------------------------------------------------
                               |
                               | detect hot code
                               v
          +-----------------------------------------------------+
          |            JIT Compiler Thread (Background)         |
          +-----------------------------------------------------+
                               |
                               | produces native machine code
                               v
          +-----------------------------------------------------+
          |           JVM switches to optimized native code     |
          +-----------------------------------------------------+
                               |
                     Application runs faster