Comparison of newCachedThreadPool() vs newFixedThreadPool()

1. What is a Thread Pool?

A thread pool is a group of pre-instantiated reusable threads that are maintained to execute tasks. Instead of launching a new thread every time a task arrives (which is costly in terms of time and system resources), tasks are submitted to a queue, and threads from the pool handle them one by one or concurrently.

Thread pools help in:

  • Improving performance by reducing the overhead of thread creation/destruction.
  • Controlling concurrency and limiting resource usage.
  • Reusing threads, which reduces latency and memory usage.

Java’s Executors class provides factory methods to create different kinds of thread pools.
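
For example, the two pools compared in this article are obtained as follows (the thread count of 4 is an arbitrary choice for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService cachedPool = Executors.newCachedThreadPool(); // grows and shrinks with the workload
ExecutorService fixedPool = Executors.newFixedThreadPool(4);  // always exactly 4 worker threads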

2. Executors.newCachedThreadPool()

This thread pool is designed for applications that launch many short-lived asynchronous tasks. It does not impose a fixed size; instead, it grows dynamically based on the workload.

If an existing idle thread is available, it reuses that thread. Otherwise, it creates a new thread to handle the task. Threads that remain idle for 60 seconds are automatically removed from the pool to free up system resources.

2.1 Internal Configuration

It internally uses the ThreadPoolExecutor class with the following configuration:

new ThreadPoolExecutor(
    0,                          // corePoolSize: No core threads initially
    Integer.MAX_VALUE,          // maximumPoolSize: Can grow without bounds
    60L, TimeUnit.SECONDS,      // keepAliveTime: Idle threads are kept for 60 sec
    new SynchronousQueue<Runnable>() // No internal queue, direct hand-off
)

2.2 Detailed Behavior

  • Core Threads = 0: No threads are kept alive by default. All threads are temporary unless actively running a task.
  • Max Threads = Integer.MAX_VALUE: New threads are created as needed, so the pool is effectively unbounded and can grow sharply under heavy load.
  • Queue = None: A SynchronousQueue does not hold tasks. If no thread is available to execute a task immediately, a new thread is created (a small sketch after this list demonstrates this).
  • Idle Timeout = 60 seconds: If a thread remains idle for more than a minute, it is terminated and removed.
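
The direct hand-off can be observed with a small sketch. The task count and sleep time below are arbitrary; because no thread becomes idle while the tasks are being submitted, the pool is expected to end up with three threads:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

ExecutorService pool = Executors.newCachedThreadPool();

// Each task blocks for a moment, so no thread becomes idle during submission.
for (int i = 0; i < 3; i++) {
    pool.submit(() -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
}

// newCachedThreadPool() is backed by a ThreadPoolExecutor, so we can inspect its size.
System.out.println("Pool size: " + ((ThreadPoolExecutor) pool).getPoolSize()); // expected: 3
pool.shutdown();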

2.3 When to Use

Use newCachedThreadPool() when:

  • You have a large number of small, short-lived tasks.
  • Tasks are I/O-bound, such as making API calls or database queries.
  • You need a high level of concurrency, and throughput is more important than resource constraints.
  • You can tolerate memory usage spikes during traffic bursts.

2.4 Potential Pitfall

Because there is no upper bound on the number of threads, under high load, it can create too many threads, potentially leading to:

  • CPU over-utilization.
  • Memory exhaustion (OutOfMemoryError).
  • Performance degradation due to excessive context switching.
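
If the unbounded growth is a concern, the same cached-pool behavior can be reproduced with an explicit cap by constructing ThreadPoolExecutor directly. The limit of 100 threads and the CallerRunsPolicy below are illustrative assumptions, not something newCachedThreadPool() itself provides:

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Same configuration as newCachedThreadPool(), but capped at 100 threads (an assumed limit).
// With a SynchronousQueue, tasks arriving while all 100 threads are busy would be rejected,
// so CallerRunsPolicy runs them on the submitting thread as a simple form of backpressure.
ThreadPoolExecutor boundedCachedPool = new ThreadPoolExecutor(
        0, 100,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(),
        new ThreadPoolExecutor.CallerRunsPolicy());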

3. Executors.newFixedThreadPool(int nThreads)

This method creates a thread pool with a fixed number of threads. No matter how many tasks you submit, only a fixed number of them (equal to nThreads) can run concurrently.

If all threads are busy, new tasks are placed in a queue and wait until a thread becomes free.

3.1 Internal Configuration

new ThreadPoolExecutor(
    nThreads,                     // corePoolSize: Fixed number of threads
    nThreads,                     // maximumPoolSize: Same as core
    0L, TimeUnit.MILLISECONDS,    // keepAliveTime: Irrelevant since threads are always alive
    new LinkedBlockingQueue<Runnable>() // Tasks are queued if no threads are available
)

3.2 Detailed Behavior

  • Fixed Threads: The pool always maintains exactly n threads.
  • Task Queuing: Extra tasks are added to the LinkedBlockingQueue, a thread-safe unbounded queue (see the sketch after this list).
  • Thread Reuse: Once a task completes, the thread is reused for the next task from the queue.
  • No Thread Termination: Threads are never removed due to idleness. They live for the duration of the pool.
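
This queuing behavior can be observed with a small sketch; the counts below are arbitrary. With 2 threads and 5 blocking tasks, 2 tasks run immediately and the other 3 are expected to wait in the queue:

import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

// newFixedThreadPool() is backed by a ThreadPoolExecutor, so the cast lets us inspect it.
ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(2);

for (int i = 0; i < 5; i++) {
    pool.submit(() -> {
        try {
            Thread.sleep(500); // keep both threads busy while the remaining tasks are submitted
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    });
}

System.out.println("Threads in pool: " + pool.getPoolSize());     // expected: 2
System.out.println("Tasks waiting:   " + pool.getQueue().size()); // expected: 3
pool.shutdown();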

3.3 When to Use

Use newFixedThreadPool() when:

  • You have predictable workloads.
  • You want to limit resource usage, e.g., CPU or memory.
  • Your tasks are CPU-intensive or long-running.
  • You want better backpressure handling — new tasks are queued rather than creating more threads.
  • You want fine control over concurrency by setting the thread count based on system capacity (see the sizing sketch below).
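
One common sizing heuristic (an assumption of this sketch, not a rule stated above) is to tie the thread count to the number of available processors for CPU-bound work:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Size the pool to the machine's CPU count for CPU-intensive tasks.
int nThreads = Runtime.getRuntime().availableProcessors();
ExecutorService cpuBoundPool = Executors.newFixedThreadPool(nThreads);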

3.4 Benefits

  • Prevents resource exhaustion.
  • Provides consistent throughput under varying loads.
  • Makes it easy to control system behavior (e.g., in server environments).

4. Key Differences: Side-by-Side Comparison

Feature               | newCachedThreadPool()                                               | newFixedThreadPool(int n)
----------------------|---------------------------------------------------------------------|---------------------------------------------------------------
Thread Count          | Grows dynamically; starts at 0 and can go up to Integer.MAX_VALUE. | Fixed number of threads, as specified.
Task Queue            | No queue; a SynchronousQueue hands tasks off directly.             | Uses a LinkedBlockingQueue to queue tasks.
Thread Lifetime       | Idle threads are removed after 60 seconds.                         | Threads remain alive indefinitely.
Concurrency           | High, since threads are created on demand.                         | Limited by the fixed thread count.
Best For              | Short-lived, I/O-bound, bursty tasks.                              | CPU-bound or long-running tasks with predictable concurrency.
Memory Risk           | High: can cause OutOfMemoryError if too many threads are created.  | Low: queueing prevents thread explosion.
Backpressure Handling | None; it keeps creating threads on task overflow.                  | Yes; tasks are queued when all threads are busy.

5. Execution Flow Diagrams

newCachedThreadPool() Flow

              +--------------------+
              |     New Task       |
              +--------------------+
                       |
                       v
      +-------------------------------+
      | Any idle thread available?    |
      +-------------------------------+
           |               |
          Yes             No
           |               |
           v               v
+------------------+   +------------------+
| Reuse idle thread|   | Create new thread|
+------------------+   +------------------+

newFixedThreadPool(n) Flow

              +--------------------+
              |     New Task       |
              +--------------------+
                       |
                       v
      +-------------------------------+
       | Any thread available to run?  |
      +-------------------------------+
           |               |
          Yes             No
           |               |
           v               v
+------------------+   +----------------------------+
| Assign to thread |   | Queue task until thread is |
+------------------+   | available                  |
                       +----------------------------+

6. Code Examples

newCachedThreadPool() Example

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService executor = Executors.newCachedThreadPool();

// Each of the 10 tasks sleeps for a second, so none of the early submissions find
// an idle thread and the pool creates a new thread for every task.
for (int i = 1; i <= 10; i++) {
    final int taskId = i;
    executor.submit(() -> {
        System.out.println("Running Task " + taskId + " in " + Thread.currentThread().getName());
        try {
            Thread.sleep(1000); // simulate short I/O-bound work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag instead of swallowing it
        }
    });
}
executor.shutdown(); // stop accepting new tasks; already-submitted tasks still run

newFixedThreadPool(3) Example

ExecutorService executor = Executors.newFixedThreadPool(3);

// Only 3 tasks run at a time; the remaining tasks wait in the pool's queue
// until one of the 3 threads becomes free.
for (int i = 1; i <= 10; i++) {
    final int taskId = i;
    executor.submit(() -> {
        System.out.println("Running Task " + taskId + " in " + Thread.currentThread().getName());
        try {
            Thread.sleep(1000); // simulate work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
        }
    });
}
executor.shutdown();

7. Use Case Scenarios

Use Case for newCachedThreadPool:

Imagine a web crawler that quickly spawns small I/O-bound tasks to fetch data from websites. You want fast response and can afford memory usage peaks. A cached thread pool will serve best here because it can scale up quickly and clean up idle threads later.
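
A minimal sketch of such a crawler, assuming a Java 11+ java.net.http.HttpClient and placeholder URLs:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

HttpClient client = HttpClient.newHttpClient();
ExecutorService crawlerPool = Executors.newCachedThreadPool();

// Each fetch is a short I/O-bound task; the cached pool scales up during the burst
// and removes the extra threads after 60 seconds of idleness.
for (String url : List.of("https://example.com/a", "https://example.com/b")) {
    crawlerPool.submit(() -> {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(url + " -> " + response.body().length() + " chars");
        return null; // submitted as a Callable so the checked exceptions can propagate
    });
}
crawlerPool.shutdown();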

Use Case for newFixedThreadPool:

A backend service processing orders from a queue. Each order takes a consistent amount of time and resources. To maintain predictable system behavior, a fixed pool ensures no more than n orders are processed simultaneously, while others wait in the queue.
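
A minimal sketch of such a service, where the order IDs and the processOrder method are hypothetical placeholders for the real logic:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService orderPool = Executors.newFixedThreadPool(4); // at most 4 orders are processed concurrently

// 20 incoming orders: 4 run right away, the rest wait in the pool's queue.
for (int orderId = 1; orderId <= 20; orderId++) {
    final int id = orderId;
    orderPool.submit(() -> processOrder(id));
}
orderPool.shutdown();

// Hypothetical stand-in for the real order-processing logic.
static void processOrder(int id) {
    System.out.println("Processing order " + id + " on " + Thread.currentThread().getName());
}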