Best way to implement an LRU cache in Java

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/6398902/

Best way to implement LRU cache

Tags: java, c++, caching

Asked by Amm Sokun

I was looking at the problem of implementing an LRU cache: once the cache is full, the least recently used item is evicted and replaced by the new item.

I have two implementations in mind:

1) Create two maps which look something like this:

std::map<timestamp, k> time_to_key
std::map<key, std::pair<timestamp, V>> LRUCache

To insert a new element, we can put the current timestamp and value in the LRUCache. When the cache is full, we can evict the least recently used element by finding the smallest timestamp present in time_to_key and removing the corresponding key from LRUCache. Inserting a new item is O(log n) (std::map is a balanced tree), and updating the timestamp is O(n) (because we need to search time_to_key for the k corresponding to the timestamp).

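If we add a third map from key to timestamp, the O(n) reverse search disappears: the old timestamp can be looked up directly, so its entry in time_to_key can be removed in O(log n). A rough Java sketch of this refinement (class and field names are my own, and a logical clock is used instead of wall-clock timestamps to avoid duplicate keys in the tree):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of approach 1, with a key -> timestamp map added so that
// touching an entry does not require an O(n) scan of timeToKey.
class TwoMapLru<K, V> {
    private long clock = 0; // logical timestamp; strictly increasing
    private final int capacity;
    private final TreeMap<Long, K> timeToKey = new TreeMap<>(); // like std::map
    private final Map<K, Long> keyToTime = new HashMap<>();
    private final Map<K, V> values = new HashMap<>();

    TwoMapLru(int capacity) { this.capacity = capacity; }

    V get(K key) {
        Long t = keyToTime.get(key);
        if (t == null) return null;
        touch(key, t);
        return values.get(key);
    }

    void put(K key, V value) {
        Long t = keyToTime.get(key);
        if (t != null) {
            touch(key, t);
        } else {
            if (values.size() == capacity) {
                // Evict the entry with the smallest timestamp: O(log n).
                K eldest = timeToKey.pollFirstEntry().getValue();
                keyToTime.remove(eldest);
                values.remove(eldest);
            }
            long now = clock++;
            timeToKey.put(now, key);
            keyToTime.put(key, now);
        }
        values.put(key, value);
    }

    // Move the key's timestamp forward: O(log n) instead of O(n).
    private void touch(K key, long oldTime) {
        timeToKey.remove(oldTime);
        long now = clock++;
        timeToKey.put(now, key);
        keyToTime.put(key, now);
    }
}
```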
2) Have a linked list in which the least recently used item is at the head and new items are added at the tail. When an item arrives that is already present in the cache, the node corresponding to that item's key is moved to the tail of the list. Inserting a new item is O(1), updating the timestamp is again O(n) (because we need to find the node before moving it to the tail of the list), and deleting an element is O(1).

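The O(n) update in approach 2 also disappears if the list is paired with a hash map from key to list node, which is the classic LRU design: every operation becomes O(1). A rough Java sketch (names are my own, not from the question):

```java
import java.util.HashMap;
import java.util.Map;

// LRU cache via hash map + doubly linked list: get, put, and evict are all O(1).
class ListLru<K, V> {
    private final class Node {
        K key; V value; Node prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node> index = new HashMap<>(); // key -> list node
    private Node head; // least recently used
    private Node tail; // most recently used

    ListLru(int capacity) { this.capacity = capacity; }

    V get(K key) {
        Node n = index.get(key);
        if (n == null) return null;
        moveToTail(n); // O(1) thanks to the key -> node map
        return n.value;
    }

    void put(K key, V value) {
        Node n = index.get(key);
        if (n != null) { n.value = value; moveToTail(n); return; }
        if (index.size() == capacity) { // evict the LRU entry at the head
            index.remove(head.key);
            unlink(head);
        }
        n = new Node(key, value);
        index.put(key, n);
        append(n);
    }

    private void unlink(Node n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
        n.prev = n.next = null;
    }

    private void append(Node n) {
        if (tail == null) { head = tail = n; return; }
        tail.next = n; n.prev = tail; tail = n;
    }

    private void moveToTail(Node n) {
        if (n == tail) return;
        unlink(n);
        append(n);
    }
}
```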
Now I have the following questions:

  1. Which of these implementations is better for an LRU cache?

  2. Is there any other way to implement an LRU cache?

  3. In Java, should I use a HashMap to implement the LRU cache?

  4. I have seen questions like "implement a generic LRU cache", and also questions like "implement an LRU cache". Is a generic LRU cache different from an LRU cache?


Thanks in advance!!!

EDIT:

Another (and the easiest) way to implement an LRU cache in Java is to use LinkedHashMap and override its boolean removeEldestEntry(Map.Entry eldest) method.

Accepted answer by Peter Lawrey

If you want an LRU cache, the simplest option in Java is LinkedHashMap. The default behaviour is FIFO, but you can change it to "access order", which makes it an LRU cache.

public static <K,V> Map<K,V> lruCache(final int maxSize) {
    return new LinkedHashMap<K, V>(maxSize*4/3, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxSize;
        }
    };
}

Note: I am using the constructor which changes the iteration order from insertion order to access order (least recently used first).

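A quick usage sketch of the factory above (the method is repeated here so the snippet stands alone):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    // Same factory as above, repeated so the snippet compiles on its own.
    public static <K, V> Map<K, V> lruCache(final int maxSize) {
        return new LinkedHashMap<K, V>(maxSize * 4 / 3, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = lruCache(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a", so "b" becomes the eldest entry
        cache.put("c", 3); // evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```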
From the Javadoc

public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder)
Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.
Parameters:
initialCapacity - the initial capacity
loadFactor - the load factor
accessOrder - the ordering mode - true for access-order, false for insertion-order

When accessOrder is true, the LinkedHashMap re-arranges the order of the map whenever you get() an entry which is not the last one.

This way, the eldest entry is the least recently used one.

Answer by Puppy

Normally, LRU caches are represented as a single queue of elements. If the one provided by your standard library doesn't allow you to remove objects from the middle, for example to put them on top, then you may have to roll your own.

Answer by aditrip

Considering that the cache allows concurrent access, the problem of designing a good LRU cache boils down to:

a) Can we avoid using a mutex while updating the two structures (the cache structure and the LRU structure)?

b) Will the read (get) operation of the cache require a mutex?

In more detail: say we implement this using java.util.concurrent.ConcurrentHashMap (cache structure) and java.util.concurrent.ConcurrentLinkedQueue (LRU structure).

a) Locking both of these structures on edit operations: addEntry(), removeEntry(), evictEntries(), etc.

b) The above could perhaps pass for slow write operations, but the problem is that even for the read (get) operation we need to lock both structures, because a get means moving the entry to the front of the queue under the LRU strategy (assuming entries are removed from the end of the queue).

Using efficient concurrent structures like ConcurrentHashMap and the wait-free ConcurrentLinkedQueue and then putting a lock over them defeats their whole purpose.

I implemented an LRU cache with the same approach; however, the LRU structure was updated asynchronously, eliminating the need for any mutex while accessing these structures. LRU is an internal detail of the cache and can be implemented in any manner without affecting the users of the cache.

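A simpler, coarse-grained alternative (not from this answer) is to wrap the access-ordered LinkedHashMap in Collections.synchronizedMap: one lock for everything, which is often acceptable before reaching for lock-free structures. A sketch:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Coarse-grained but correct thread-safe LRU: every operation takes one lock.
public final class SynchronizedLru {
    private SynchronizedLru() {}

    public static <K, V> Map<K, V> create(final int maxSize) {
        return Collections.synchronizedMap(new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        });
    }
}
```

Note that with accessOrder set, even get() structurally modifies the map, so any iteration over the wrapped map must also be synchronized externally.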
Later, I also read about ConcurrentLinkedHashMap

https://code.google.com/p/concurrentlinkedhashmap/

and found that it is also trying to do the same thing. I have not used this structure, but it might be a good fit.

Answer by RickHigh

I'd like to expand on some of these suggestions with two possible implementations: one not thread safe, and one that might be.

Here is the simplest version, with a unit test that shows it works.

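Both versions implement an LruCache interface that the answer does not show; a minimal declaration, inferred from the methods the classes actually use, might look like:

```java
// Inferred from the methods used by LruSimpleCache and LruSimpleConcurrentCache;
// the original interface is not shown in the answer.
public interface LruCache<K, V> {
    void put(K key, V value);
    V get(K key);
    V getSilent(K key); // test helper
    void remove(K key);
    int size();
}
```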
First the non-concurrent version:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruSimpleCache<K, V> implements LruCache <K, V>{

    Map<K, V> map; // assigned in the constructor


    public LruSimpleCache (final int limit) {
           map = new LinkedHashMap <K, V> (16, 0.75f, true) {
               @Override
               protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                   return super.size() > limit;
               }
           };
    }
    @Override
    public void put ( K key, V value ) {
        map.put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map.get(key);
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        V value =  map.get ( key );
        if (value!=null) {
            map.remove ( key );
            map.put(key, value);
        }
        return value;
    }

    @Override
    public void remove ( K key ) {
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }


}

The true flag makes the map track the order of gets and puts. See the JavaDocs. Without the true flag to the constructor, removeEldestEntry would just implement a FIFO cache (see the notes below on FIFO and removeEldestEntry).

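To see the difference the flag makes, here is a small demo (not from the answer) contrasting eviction with and without access order:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class FifoVsLruDemo {
    // Bounded LinkedHashMap; accessOrder=false gives FIFO, true gives LRU.
    static <K, V> Map<K, V> bounded(final int max, boolean accessOrder) {
        return new LinkedHashMap<K, V>(16, 0.75f, accessOrder) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > max;
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> fifo = bounded(2, false);
        Map<Integer, String> lru = bounded(2, true);
        for (Map<Integer, String> m : Arrays.asList(fifo, lru)) {
            m.put(1, "a");
            m.put(2, "b");
            m.get(1);      // only matters in access order
            m.put(3, "c"); // forces one eviction
        }
        System.out.println(fifo.keySet()); // [2, 3]: FIFO evicted 1 despite the get
        System.out.println(lru.keySet());  // [1, 3]: LRU evicted 2 instead
    }
}
```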
Here is the test that proves it works as an LRU cache:

public class LruSimpleTest {

    @Test
    public void test () {
        LruCache <Integer, Integer> cache = new LruSimpleCache<> ( 4 );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();


        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();


        if ( !ok ) die ();

    }

}

Now for the concurrent version...

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruSimpleConcurrentCache<K, V> implements LruCache<K, V> {

    final CacheMap<K, V>[] cacheRegions;


    private static class CacheMap<K, V> extends LinkedHashMap<K, V> {
        private final ReadWriteLock readWriteLock;
        private final int limit;

        CacheMap ( final int limit, boolean fair ) {
            super ( 16, 0.75f, true );
            this.limit = limit;
            readWriteLock = new ReentrantReadWriteLock ( fair );

        }

        protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
            return super.size () > limit;
        }


        @Override
        public V put ( K key, V value ) {
            readWriteLock.writeLock ().lock ();

            V old;
            try {

                old = super.put ( key, value );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return old;

        }


        @Override
        public V get ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.get ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        @Override
        public V remove ( Object key ) {

            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.remove ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public V getSilent ( K key ) {
            readWriteLock.writeLock ().lock ();

            V value;

            try {

                value = this.get ( key );
                if ( value != null ) {
                    this.remove ( key );
                    this.put ( key, value );
                }
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public int size () {
            readWriteLock.readLock ().lock ();
            int size = -1;
            try {
                size = super.size ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return size;
        }

        public String toString () {
            readWriteLock.readLock ().lock ();
            String str;
            try {
                str = super.toString ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return str;
        }


    }

    public LruSimpleConcurrentCache ( final int limit, boolean fair ) {
        int cores = Runtime.getRuntime ().availableProcessors ();
        int stripeSize = cores < 2 ? 4 : cores * 2;
        cacheRegions = new CacheMap[ stripeSize ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    public LruSimpleConcurrentCache ( final int concurrency, final int limit, boolean fair ) {

        cacheRegions = new CacheMap[ concurrency ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    private int stripeIndex ( K key ) {
        int hashCode = key.hashCode () * 31;
        // Mask the sign bit so a negative hashCode cannot produce a negative index.
        return ( hashCode & 0x7fffffff ) % ( cacheRegions.length );
    }

    private CacheMap<K, V> map ( K key ) {
        return cacheRegions[ stripeIndex ( key ) ];
    }

    @Override
    public void put ( K key, V value ) {

        map ( key ).put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map ( key ).get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        return map ( key ).getSilent ( key );

    }

    @Override
    public void remove ( K key ) {
        map ( key ).remove ( key );
    }

    @Override
    public int size () {
        int size = 0;
        for ( CacheMap<K, V> cache : cacheRegions ) {
            size += cache.size ();
        }
        return size;
    }

    public String toString () {

        StringBuilder builder = new StringBuilder ();
        for ( CacheMap<K, V> cache : cacheRegions ) {
            builder.append ( cache.toString () ).append ( '\n' );
        }

        return builder.toString ();
    }


}

You can see why I cover the non-concurrent version first. The code above attempts to create some stripes to reduce lock contention. So it hashes the key and then looks up that hash to find the actual cache. This makes the limit size more of a suggestion/rough guess, within a fair amount of error, depending on how well spread your keys' hash algorithm is.

Answer by learner

For O(1) access we need a hash table, and to maintain order we can use a DLL (doubly linked list). The basic algorithm is: from the page number we can get to the DLL node using the hash table. If the page exists, we can move the node to the head of the DLL; otherwise we insert the node into both the DLL and the hash table. If the DLL is full, we can remove the least recently used node from the tail.

Here is an implementation based on a doubly linked list and unordered_map in C++.

#include <iostream>
#include <unordered_map>
#include <utility>

using namespace std;

// List node class
class Node
{
    public:
    int data;
    Node* next;
    Node* prev;
};

//Doubly Linked list
class DLList
{
    public:

    DLList()
    {
        head = NULL;
        tail = NULL;
        count = 0;
    }

    ~DLList() {}

    Node* addNode(int val);
    void print();
    int removeTail();
    void moveToHead(Node* node);

    int size()
    {
        return count;
    }

    private:
    Node* head;
    Node* tail;
    int count;
};

// Function to add a node to the list

Node* DLList::addNode(int val)
{
    Node* temp = new Node();

    temp->data = val;
    temp->next = NULL;
    temp->prev = NULL;

    if ( tail == NULL )
    {
        tail = temp;
        head = temp;
    }
    else
    {
        head->prev = temp;
        temp->next = head;
        head = temp;
    }

    count++;

    return temp;
}

void DLList::moveToHead(Node* node)
{
    if (head == node)
        return;

    node->prev->next = node->next;

    if (node->next != NULL)
    {
        node->next->prev = node->prev;
    }
    else
    {
        tail = node->prev;
    }

    node->next = head;
    node->prev = NULL;
    head->prev = node;
    head = node;
}

// Removes the least recently used node and returns its value,
// so the caller can erase the matching hash-table entry.
int DLList::removeTail()
{
    if (tail == NULL)
        return -1;

    int val = tail->data;

    count--;

    if (head == tail)
    {
        delete head;
        head = NULL;
        tail = NULL;
    }
    else
    {
        Node* del = tail;
        tail = del->prev;
        tail->next = NULL;
        delete del;
    }

    return val;
}

void DLList::print()
{
    Node* temp = head;

    int ctr = 0;

    while ( (temp != NULL) && (ctr++ != 25) )
    {
        cout << temp->data << " ";
        temp = temp->next;
    }
    cout << endl;
}

class LRUCache
{
    public:
        LRUCache(int aCacheSize);
        void fetchPage(int pageNumber);

    private:
        int cacheSize;
        DLList dlist;
        unordered_map< int, Node* > directAccess;
};

LRUCache::LRUCache(int aCacheSize):cacheSize(aCacheSize) { }

void LRUCache::fetchPage(int pageNumber)
{
    unordered_map< int, Node* >::const_iterator it = directAccess.find(pageNumber);

    if (it != directAccess.end())
    {
        dlist.moveToHead(it->second);
    }
    else
    {
        if (dlist.size() == cacheSize)
        {
            // Evict the LRU page and erase its hash-table entry;
            // otherwise directAccess would keep a dangling Node*.
            int evicted = dlist.removeTail();
            directAccess.erase(evicted);
        }

        Node* node = dlist.addNode(pageNumber);

        directAccess.insert(pair< int, Node* >(pageNumber, node));
    }

    dlist.print();
}

int main()
{
    LRUCache lruCache(10);

    lruCache.fetchPage(5);
    lruCache.fetchPage(7);
    lruCache.fetchPage(15);
    lruCache.fetchPage(34);
    lruCache.fetchPage(23);
    lruCache.fetchPage(21);
    lruCache.fetchPage(7);
    lruCache.fetchPage(32);
    lruCache.fetchPage(34);
    lruCache.fetchPage(35);
    lruCache.fetchPage(15);
    lruCache.fetchPage(37);
    lruCache.fetchPage(17);
    lruCache.fetchPage(28);
    lruCache.fetchPage(16);

    return 0;
}

Answer by vaquar khan

Problem Statement :

Create an LRU cache that stores Employee objects (max = 5 objects) and find out who logged in first and after ….

package com.test.example.dto;

import java.sql.Timestamp;
/**
 * 
 * @author [email protected]
 *
 */
public class Employee implements Comparable<Employee> {
    private int     id;
    private String  name;
    private int     age;
    private Timestamp loginTime ;

public int getId() {
    return id;
}

public void setId(int id) {
    this.id = id;
}

public String getName() {
    return name;
}

public void setName(String name) {
    this.name = name;
}

public int getAge() {
    return age;
}

public void setAge(int age) {
    this.age = age;
}

public Timestamp getLoginTime() {
    return loginTime;
}

public void setLoginTime(Timestamp loginTime) {
    this.loginTime = loginTime;
}

@Override
public String toString() {
    return "Employee [id=" + id + ", name=" + name + ", age=" + age + ", loginTime=" + loginTime + "]";
}

Employee(){}

public Employee(int id, String name, int age, Timestamp loginTime) {
    super();
    this.id = id;
    this.name = name;
    this.age = age;
    this.loginTime = loginTime;
}

@Override
public int hashCode() {
    final int prime = 31;
    int result = 1;
    result = prime * result + age;
    result = prime * result + id;
    result = prime * result + ((loginTime == null) ? 0 : loginTime.hashCode());
    result = prime * result + ((name == null) ? 0 : name.hashCode());
    return result;
}

@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null) return false;
    if (getClass() != obj.getClass()) return false;
    Employee other = (Employee) obj;
    if (age != other.age) return false;
    if (id != other.id) return false;
    if (loginTime == null) {
        if (other.loginTime != null) return false;
    } else if (!loginTime.equals(other.loginTime)) return false;
    if (name == null) {
        if (other.name != null) return false;
    } else if (!name.equals(other.name)) return false;
    return true;
}

@Override
public int compareTo(Employee emp) {
    if (emp.getLoginTime().before( this.loginTime) ){
        return 1;
    } else if (emp.getLoginTime().after(this.loginTime)) {
        return -1;
    }
    return 0;
}


}

LRUObjectCacheExample

package com.test.example;

import java.sql.Timestamp;
import java.util.Calendar;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;
import com.test.example.dto.Employee;
/**
 * 
 * @author [email protected]
 *
 */
public class LRUObjectCacheExample {


    LinkedHashMap<Employee, Boolean>    lruCacheLinkedQueue;

public LRUObjectCacheExample(int capacity) {
    lruCacheLinkedQueue = new LinkedHashMap<Employee, Boolean>(capacity, 1.0f, true) {
        /**
         * 
         */
        private static final long   serialVersionUID    = 1L;

        @Override
        protected boolean removeEldestEntry(
                //calling map's entry method
                Map.Entry<Employee, Boolean> eldest) {
            return this.size() > capacity;
        }
    };
}

void addDataIntoCache(Employee employee) {
    lruCacheLinkedQueue.put(employee, true);
    displayLRUQueue();
}

boolean checkIfDataPresentIntoLRUCache(Employee employee) {
    // The map is keyed by Employee, so lookups must use an Employee, not an int.
    return lruCacheLinkedQueue.get(employee) != null;
}

void deleteEmployee(Employee employee) {
    if (lruCacheLinkedQueue.get(employee) != null) {
        lruCacheLinkedQueue.remove(employee);
    }
    displayLRUQueue();
}

void displayLRUQueue() {
    System.out.print("-------------------------------------------------------" + "\n");
    System.out.print("Data into LRU Cache : ");
    for (Entry<Employee, Boolean> mapEntry : lruCacheLinkedQueue.entrySet()) {
        System.out.print("[" + mapEntry.getKey() + "]");
    }
    System.out.println("");
}

public static void main(String args[]) {
    Employee employee1 = new Employee(1, "Shahbaz", 29, getCurrentTimeStamp());
    Employee employee2 = new Employee(2, "Amit", 35, getCurrentTimeStamp());
    Employee employee3 = new Employee(3, "viquar", 36, getCurrentTimeStamp());
    Employee employee4 = new Employee(4, "Sunny", 20, getCurrentTimeStamp());
    Employee employee5 = new Employee(5, "sachin", 28, getCurrentTimeStamp());
    Employee employee6 = new Employee(6, "Sneha", 25, getCurrentTimeStamp());
    Employee employee7 = new Employee(7, "chantan", 19, getCurrentTimeStamp());
    Employee employee8 = new Employee(8, "nitin", 22, getCurrentTimeStamp());
    Employee employee9 = new Employee(9, "sanuj", 31, getCurrentTimeStamp());
    //
    LRUObjectCacheExample lru = new LRUObjectCacheExample(5);
    lru.addDataIntoCache(employee5); // sachin
    lru.addDataIntoCache(employee4); // Sunny
    lru.addDataIntoCache(employee3); // viquar
    lru.addDataIntoCache(employee2); // Amit
    lru.addDataIntoCache(employee1); // Shahbaz ----- capacity reached
    //
    lru.addDataIntoCache(employee6); // Sneha
    lru.addDataIntoCache(employee7); // chantan
    lru.addDataIntoCache(employee8); // nitin
    lru.addDataIntoCache(employee9); // sanuj
    //
    lru.deleteEmployee(employee3); // already evicted, so this is a no-op
    lru.deleteEmployee(employee4); // already evicted, so this is a no-op

    }
    private static Timestamp getCurrentTimeStamp(){
        return new java.sql.Timestamp(Calendar.getInstance().getTime().getTime());
    }

}

Results:

**Data into LRU Cache :** 
[Employee [id=1, name=Shahbaz, age=29, loginTime=2015-10-15 18:47:28.1
[Employee [id=6, name=Sneha, age=25, loginTime=2015-10-15 18:47:28.125
[Employee [id=7, name=chantan, age=19, loginTime=2015-10-15 18:47:28.125
[Employee [id=8, name=nitin, age=22, loginTime=2015-10-15 18:47:28.125
[Employee [id=9, name=sanuj, age=31, loginTime=2015-10-15 18:47:28.125

Answer by bhushan5640

Extending Peter Lawrey's answer

The removeEldestEntry(java.util.Map.Entry) method may be overridden to impose a policy for removing stale mappings automatically when new mappings are added to the map.

So LinkedHashMap's FIFO behaviour can be overridden to make an LRU cache:

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    private int cacheSize;

    public LRUCache(int cacheSize) {
        super(cacheSize * 4 / 3, 0.75f, true);
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > cacheSize; // strictly greater, so the cache holds exactly cacheSize entries
    }

    public static void main(String args[]) {
        LRUCache<Integer, Integer> lruCache = new LRUCache<>(5);

        lruCache.put(1, 1);
        lruCache.put(2, 2);
        lruCache.put(3, 3);
        lruCache.put(1, 4);
        lruCache.put(2, 5);
        lruCache.put(7, 6);

        System.out.println(lruCache.keySet());

        lruCache.put(1, 4);
        lruCache.put(2, 5);

        System.out.println(lruCache.keySet());
    }
} 

Answer by anonymous

import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

class LRU {
    Map<String, Integer> ageMap = new HashMap<String, Integer>();
    int age = 1;

    void addElementWithAge(String element) {
        // Re-adding an element refreshes its age, making it the most recent.
        ageMap.put(element, age);
        age++;
    }

    String getLeastRecent(Map<String, Integer> ageMap) {
        // Linear scan for the smallest age, so this lookup is O(n).
        Integer oldestAge = (Integer) ageMap.values().toArray()[0];
        String element = null;
        for (Entry<String, Integer> entry : ageMap.entrySet()) {
            if (oldestAge >= entry.getValue()) {
                oldestAge = entry.getValue();
                element = entry.getKey();
            }
        }
        return element;
    }

    public static void main(String args[]) {
        LRU obj = new LRU();
        obj.addElementWithAge("M1");
        obj.addElementWithAge("M2");
        obj.addElementWithAge("M3");
        obj.addElementWithAge("M4");
        obj.addElementWithAge("M1");
        // "M2" is now the least recent, since "M1" was refreshed last.
        System.out.println(obj.getLeastRecent(obj.ageMap));
    }

}

Answer by ravthiru

LinkedHashMap allows you to override the removeEldestEntry method so that whenever a put is executed, you can specify whether to remove the oldest entry, thus allowing implementation of an LRU. Note that the constructor must also request access order; otherwise the result is a FIFO cache:

Map<Long, String> myLRUCache = new LinkedHashMap<Long, String>(16, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, String> eldest) {
        return this.size() > 1000;
    }
};