
Implementing LRU Cache in Java

Caching is one of the most essential techniques for improving performance in systems that require fast access to frequently used data. An LRU (Least Recently Used) Cache is a type of cache algorithm that evicts the least recently used items first when the cache reaches its maximum capacity.

In this tutorial, we’ll cover:

  1. What is an LRU Cache
  2. Why we need it
  3. How it works internally
  4. Different ways to implement it in Java
  5. Step-by-step code examples
  6. Time and space complexity analysis

1. What is an LRU Cache?

An LRU Cache (Least Recently Used Cache) is a data structure that:

  • Stores a limited number of key-value pairs.
  • Ensures that when the cache is full and a new entry is added, the least recently used item is removed.
  • Updates usage order whenever a key is accessed (either read or written).

In other words, recently accessed data stays in memory, while older, unused data gets evicted when space runs out.


2. Real-World Analogy

Imagine your smartphone’s “Recent Apps” list:

  • When you open an app, it appears at the front of the list.
  • When you open more apps than your phone can remember, the oldest app (the one you haven’t used for the longest time) is removed.

That’s exactly how an LRU Cache behaves.


3. Key Operations of an LRU Cache

An LRU Cache supports two main operations:

Operation       | Description                                                                                                     | Expected Time Complexity
get(key)        | Returns the value for the given key if it exists; otherwise returns -1. Marks the key as recently used.        | O(1)
put(key, value) | Inserts or updates the value for the key. If the cache exceeds capacity, evicts the least recently used entry. | O(1)
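
To make these semantics concrete, here is a short operation trace for a hypothetical cache of capacity 2 (the integer values are arbitrary):

// Hypothetical trace, capacity = 2
put(1, 10);   // cache contents, most recent first: {1}
put(2, 20);   // {2, 1}
get(1);       // returns 10; key 1 becomes most recently used: {1, 2}
put(3, 30);   // capacity exceeded -> evicts key 2 (least recently used): {3, 1}
get(2);       // returns -1 (key 2 was evicted)

Both implementations below reproduce exactly this behavior.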

4. How Does an LRU Cache Work Internally?

The LRU cache must efficiently:

  1. Access items quickly – so we need a HashMap for O(1) lookups.
  2. Track the usage order – so we need a Doubly Linked List to move recently used elements to the front and remove old ones from the end efficiently.

Structure

  • HashMap:
    Maps keys to nodes in the linked list (key → Node).
  • Doubly Linked List:
    Maintains the usage order, with most recently used items at the front and least recently used at the back.

LRU Cache Design

[Most Recently Used]  head → node1 ↔ node2 ↔ node3 ↔ ... ↔ nodeN ← tail  [Least Recently Used]

  • When a key is accessed: move its node to the front.
  • When a new key is added and the cache is full: remove the node at the tail.
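
As a minimal sketch of these two building blocks (class and field names here are illustrative; Section 6 develops the full version):

import java.util.HashMap;
import java.util.Map;

class Node {
    int key, value;
    Node prev, next; // doubly linked, so any node can be unlinked in O(1)
}

class LRUCacheSkeleton {
    Map<Integer, Node> map = new HashMap<>(); // key -> node, for O(1) lookups
    Node head, tail;                          // ends of the list: MRU side and LRU side
}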

5. Implementation Approach 1: Using LinkedHashMap

Java’s LinkedHashMap already maintains insertion order and allows access-order tracking through a simple constructor flag.
We can extend it to build an elegant LRU cache in just a few lines.

Code Example

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // accessOrder = true maintains order of access (not insertion)
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Automatically remove least recently used entry when size exceeds capacity
        return size() > capacity;
    }
}

public class LRUCacheExample {
    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(3);
        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");

        cache.get(1); // Access 1 -> marks it as recently used
        cache.put(4, "D"); // Removes least recently used (key 2)

        System.out.println(cache); // Output: {3=C, 1=A, 4=D} (iterated least recently used first)
    }
}

Explanation

  • We create a subclass of LinkedHashMap and enable access-order by passing true to the constructor.
  • We override removeEldestEntry to automatically remove the least recently used entry once the cache size exceeds the limit.
  • The LinkedHashMap automatically reorders elements on each get() or put().

Time Complexity

  • get() → O(1)
  • put() → O(1)

Advantages

  • Very concise and easy to maintain.
  • Internally optimized by the JDK.

Limitations

  • Not thread-safe (for concurrent environments, wrap it with synchronization or combine ConcurrentHashMap with custom logic; see the sketch below).
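
As a minimal sketch (the class name SynchronizedLRUCacheExample is illustrative), the LRUCache from above can be wrapped with Collections.synchronizedMap so that every operation runs under one lock. Note that get() must be locked too: with accessOrder = true, even reads mutate the internal ordering.

import java.util.Collections;
import java.util.Map;

public class SynchronizedLRUCacheExample {
    public static void main(String[] args) {
        // Coarse-grained locking: correct, but all threads contend on a single lock.
        Map<Integer, String> cache = Collections.synchronizedMap(new LRUCache<>(3));

        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");
        cache.get(1);      // locked read; also marks key 1 as recently used
        cache.put(4, "D"); // evicts key 2 under the same lock

        System.out.println(cache); // {3=C, 1=A, 4=D}
    }
}

This trades throughput for simplicity; under heavy contention a more fine-grained design would be needed.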

6. Implementation Approach 2: Custom Implementation Using HashMap + Doubly Linked List

If you want to understand how LRU cache works internally, it’s best to implement it yourself using core data structures.

Steps

  1. Create a Node class for the doubly linked list.
  2. Maintain a HashMap to store key → node mappings.
  3. Implement methods to:
    • Add a node to the front (most recently used).
    • Remove a node.
    • Move a node to the front when accessed.
    • Remove the least recently used node from the end.

Complete Code

import java.util.HashMap;

class LRUCache {
    private class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) {
            this.key = key;
            this.value = value;
        }
    }

    private final int capacity;
    private final HashMap<Integer, Node> cache;
    private Node head, tail;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.cache = new HashMap<>();

        head = new Node(0, 0); // dummy head
        tail = new Node(0, 0); // dummy tail
        head.next = tail;
        tail.prev = head;
    }

    public int get(int key) {
        if (!cache.containsKey(key)) return -1;
        Node node = cache.get(key);
        remove(node);
        addToFront(node);
        return node.value;
    }

    public void put(int key, int value) {
        if (cache.containsKey(key)) {
            remove(cache.get(key));
        } else if (cache.size() == capacity) {
            // Remove LRU from tail
            cache.remove(tail.prev.key);
            remove(tail.prev);
        }
        Node newNode = new Node(key, value);
        addToFront(newNode);
        cache.put(key, newNode);
    }

    private void addToFront(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    private void remove(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    public void printCache() {
        Node curr = head.next;
        System.out.print("Cache state: ");
        while (curr != tail) {
            System.out.print("[" + curr.key + "=" + curr.value + "] ");
            curr = curr.next;
        }
        System.out.println();
    }
}

public class CustomLRUCacheExample {
    public static void main(String[] args) {
        LRUCache cache = new LRUCache(3);
        cache.put(1, 10);
        cache.put(2, 20);
        cache.put(3, 30);
        cache.printCache(); // [3=30] [2=20] [1=10]

        cache.get(1); // Access 1 -> becomes most recent
        cache.printCache(); // [1=10] [3=30] [2=20]

        cache.put(4, 40); // Evict least recently used (key=2)
        cache.printCache(); // [4=40] [1=10] [3=30]
    }
}

Explanation

  1. The HashMap ensures O(1) lookup for keys.
  2. The Doubly Linked List keeps track of the access order.
    • The front of the list (just after the dummy head) holds the most recently used entry.
    • The back of the list (just before the dummy tail) holds the least recently used entry.
  3. Each access (get or put) moves a node to the front.
  4. When capacity is exceeded, the least recently used node (just before the tail) is removed.

7. Time and Space Complexity

Operation | Time Complexity | Explanation
get()     | O(1)            | HashMap lookup plus moving a node within the linked list
put()     | O(1)            | HashMap insertion plus insertion/removal in the doubly linked list
Space     | O(capacity)     | Storage for the nodes and HashMap entries

8. Choosing Between Approaches

Implementation         | Pros                                       | Cons
LinkedHashMap          | Simple, concise, internally optimized      | Not thread-safe, less flexible for custom logic
Custom (HashMap + DLL) | Full control, better for interviews/demos  | More code, manual maintenance

If you’re building a production system with minimal concurrency needs, use LinkedHashMap.
If you’re learning data structures or preparing for interviews, implement the custom approach yourself.


9. Enhancements (Advanced Ideas)

  1. Thread Safety:
    Wrap operations with synchronized or use ConcurrentHashMap for multi-threaded access.
  2. TTL (Time-to-Live):
    Add timestamps to nodes and evict items after expiration (see the sketch after this list).
  3. Metrics:
    Track cache hits/misses for performance monitoring.
  4. Integration with Database:
    Implement write-through or write-back strategies for persistence.
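
As an illustration of the TTL idea (the names TtlCache, TtlEntry, expiresAtMillis, and isExpired are all hypothetical), each entry can carry an absolute expiry time that get() checks lazily; in the custom implementation from Section 6, the same fields would go on Node:

import java.util.HashMap;
import java.util.Map;

class TtlCache<K, V> {

    private static class TtlEntry<T> {
        final T value;
        final long expiresAtMillis; // absolute expiry time (wall-clock millis)

        TtlEntry(T value, long ttlMillis) {
            this.value = value;
            this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
        }

        boolean isExpired() {
            return System.currentTimeMillis() >= expiresAtMillis;
        }
    }

    private final Map<K, TtlEntry<V>> map = new HashMap<>();

    public void put(K key, V value, long ttlMillis) {
        map.put(key, new TtlEntry<>(value, ttlMillis));
    }

    // Lazy expiration: an expired entry is removed on the next read.
    public V get(K key) {
        TtlEntry<V> entry = map.get(key);
        if (entry == null) return null;
        if (entry.isExpired()) {
            map.remove(key);
            return null;
        }
        return entry.value;
    }
}

The metrics idea can be layered on in the same way: increment hit/miss counters inside get() and expose them through getters.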

10. Summary

Feature              | Description
Definition           | Cache that removes the least recently used item when full
Core Operations      | get(key) and put(key, value)
Goal                 | Maintain quick access and an efficient eviction policy
Best Data Structures | HashMap + Doubly Linked List
Time Complexity      | O(1) for both get and put
Space Complexity     | O(capacity)