LRU Cache implementation

This is a well-known algorithm, often given as an exercise to students, but it’s an interesting one.

If someone asks you to implement an LRU (Least Recently Used) cache, how would you do it?
Since it’s a cache, you need to guarantee O(1) reads and writes. For fast access, hash tables are very often the right data structure, so we can consider one, but we also need to keep track of the order of use, and hash tables cannot do that.
An LRU cache also evicts its entries oldest first, like a FIFO (First In First Out) queue, so a queue looks well adapted too, but then we lose the O(1) access time.

A good approach is to use both:

  • A hash table for the O(1) lookup time
  • A queue to keep track of the order

The only problem is that queues are very efficient for enqueue and dequeue but very slow for random access. Since every cache hit has to reorder the queue, each access would cost O(n) to find and move the entry.

The good strategy is to keep this approach but use a doubly linked list instead of a queue, because:

  • It is very easy to implement a queue with it
  • It is still slow for random access, but moving a given node to the head is easy and cheap, and that’s all we need
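To see why the doubly linked list fits, note that unlinking a node is constant time once you hold a reference to it. A minimal sketch (the `ListNode` name is mine, separate from the `Node` class used by the cache below):

```java
// Minimal doubly linked list node, for illustration only.
class ListNode {
    int value;
    ListNode previous;
    ListNode next;

    ListNode(int value) {
        this.value = value;
    }

    // Unlink this node in O(1): just rewire both neighbours,
    // no traversal of the list is needed.
    void unlink() {
        if (this.previous != null) {
   =;
        }
        if ( != null) {
  = this.previous;
        }
        this.previous = null; = null;
    }
}
```

This constant-time unlink is exactly what a hit needs: pull the node out of its current position, then push it back at the head.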

Let’s focus on the implementation. First we need to define a node that holds the key/value pair:

class Node {
    int key;
    int data;
    Node previous;
    Node next;
}

Then the LRUCache definition and its constructor:

import java.util.HashMap;
import java.util.Map;

class LRUCache {

    private int capacity;
    private Map<Integer, Node> data;
    private Node head;
    private Node end;

    public LRUCache(int capacity) {
        this.capacity = capacity; = new HashMap<>();
    }

Then we write an add method that appends the node at the head:

    private void add(Node node) {

        // Reset position = null;
        node.previous = null;

        // First element
        if (null == this.head) {
            this.head = node;
            this.end = node;
            return;
        }

        // Existing element
        this.head.previous = node; = this.head;
        this.head = node;
    }

Same thing for the remove:

    private void remove(Node node) {

        // Nothing to do
        if (this.head == null || null == node) {
            return;
        }

        // The only one item
        if (this.head == this.end && this.head == node) {
            this.head = null;
            this.end = null;
            return;
        }

        // Remove from head
        if (node == this.head) {
   = null;
            this.head =;
            return;
        }

        // Remove from end
        if (node == this.end) {
   = null;
            this.end = this.end.previous;
            return;
        }

        // Remove in the middle =; = node.previous;
    }

Two helper methods complete the list: moveFirst moves a node to the head (for hits), and removeLast cleans up the oldest item:

    private void moveFirst(Node node) {
        this.remove(node);
        this.add(node);
    }

    private void removeLast() {
        this.remove(this.end);
    }

The linked list is only partially implemented, but it’s enough for our needs.
The LRU get method simply retrieves the node for the key and moves it to the head of the list:

    public int get(int key) {

        // Existing key
        if ( {

            // Move to first place
            Node node =;

            // Return the value
        }

        // Not found
        return -1;
    }

The last method, set, also counts as a hit, so it moves the accessed element to the head.
Like the get method, the set method deals with both the hash table and the linked list:

    public void set(int key, int value) {

        // Existing slot
        if ( {
            Node node =;
   = value;
            return;
        }

        // Out of capacity, cleaning the oldest slot
        if ( >= this.capacity) {
            int id = this.end.key;
        }

        // New slot
        Node node = new Node();
        node.key = key; = value;
        this.add(node);, node);
    }
}
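Putting everything together, here is the cache assembled into a single compilable unit, with a short demo of the eviction behaviour: with a capacity of two, touching key 1 and then inserting a third key evicts key 2, the least recently used one.

```java
import java.util.HashMap;
import java.util.Map;

class Node {
    int key;
    int data;
    Node previous;
    Node next;
}

class LRUCache {
    private int capacity;
    private Map<Integer, Node> data;
    private Node head;
    private Node end;

    public LRUCache(int capacity) {
        this.capacity = capacity; = new HashMap<>();
    }

    // Append the node at the head of the list
    private void add(Node node) { = null;
        node.previous = null;
        if (this.head == null) {
            this.head = node;
            this.end = node;
            return;
        }
        this.head.previous = node; = this.head;
        this.head = node;
    }

    // Unlink the node from the list in O(1)
    private void remove(Node node) {
        if (this.head == null || node == null) {
            return;
        }
        if (this.head == this.end && this.head == node) {
            this.head = null;
            this.end = null;
            return;
        }
        if (node == this.head) {
   = null;
            this.head =;
            return;
        }
        if (node == this.end) {
   = null;
            this.end = this.end.previous;
            return;
        } =; = node.previous;
    }

    private void moveFirst(Node node) {
        this.remove(node);
        this.add(node);
    }

    private void removeLast() {
        this.remove(this.end);
    }

    public int get(int key) {
        Node node =;
        if (node == null) {
            return -1;        // not found
        }
        this.moveFirst(node); // a read is a hit
        return;
    }

    public void set(int key, int value) {
        if ( {
            Node node =;
   = value;
            this.moveFirst(node); // a write is a hit too
            return;
        }
        if ( >= this.capacity) {
  ; // drop the oldest entry
            this.removeLast();
        }
        Node node = new Node();
        node.key = key; = value;
        this.add(node);, node);
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.set(1, 100);
        cache.set(2, 200);
        System.out.println(cache.get(1)); // prints 100, key 1 becomes most recent
        cache.set(3, 300);                // evicts key 2, the least recently used
        System.out.println(cache.get(2)); // prints -1, key 2 is gone
        System.out.println(cache.get(3)); // prints 300
    }
}
```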

Of course, you probably shouldn’t implement your own cache or linked list; there are plenty of very efficient implementations out there.
In Java you can build an LRUCache in a couple of instructions by reusing the built-in data structures, but here we just want to understand what’s going on and write our own for learning purposes.
Notice also that this implementation is not thread safe.
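For reference, here is the built-in route mentioned above: java.util.LinkedHashMap can be constructed in access order and asked to drop the eldest entry on insertion, which gives an LRU cache in a few lines (the `BuiltInLRUCache` name is mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache built on LinkedHashMap: the accessOrder flag keeps
// entries ordered by last access, and removeEldestEntry is called
// by put to decide whether to evict.
class BuiltInLRUCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    BuiltInLRUCache(int capacity) {
        // 0.75f is the default load factor; true switches the map
        // from insertion order to access order
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > this.capacity;
    }
}
```

Unlike the hand-rolled version, get here returns null rather than -1 for a missing key. This version is not thread safe either; wrap it with Collections.synchronizedMap or an explicit lock for concurrent use.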