LRU Implementation in Production C++ Code
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse it, you must likewise follow the CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/2057424/
LRU implementation in production code
Asked by sud03r
I have some C++ code where I need to implement cache replacement using LRU technique.
So far I know two methods to implement LRU cache replacement:
- Keeping a timestamp for each access to the cached data, and comparing the timestamps at replacement time.
- Keeping a stack of cached items and moving an item to the top when it is accessed, so the bottom always holds the LRU candidate.
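The first approach can be sketched as follows. This is an illustrative implementation, not from the question; a logical counter stands in for real timestamps to sidestep clock-resolution issues, and all names are hypothetical. Note that eviction is an O(n) scan, which is the main drawback of this method:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>

// Sketch of the timestamp approach: stamp each entry on access and, at
// replacement time, evict the entry with the smallest stamp.
class TimestampLru {
public:
    explicit TimestampLru(std::size_t max) : mMax(max) {}

    void put(const std::string& key, int value) {
        if (mEntries.count(key) == 0 && mEntries.size() >= mMax)
            evictOldest();
        mEntries[key] = {value, ++mClock};  // stamp on insert/update
    }

    // Returns nullptr if absent; otherwise stamps the entry and
    // returns a pointer to its value.
    const int* get(const std::string& key) {
        auto it = mEntries.find(key);
        if (it == mEntries.end()) return nullptr;
        it->second.stamp = ++mClock;  // record the access
        return &it->second.value;
    }

private:
    struct Slot { int value; std::uint64_t stamp; };

    void evictOldest() {
        auto victim = mEntries.begin();
        for (auto it = mEntries.begin(); it != mEntries.end(); ++it)
            if (it->second.stamp < victim->second.stamp) victim = it;
        mEntries.erase(victim);  // smallest stamp = least recently used
    }

    std::unordered_map<std::string, Slot> mEntries;
    std::uint64_t mClock = 0;  // logical clock instead of wall time
    std::size_t mMax;
};
```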
So, which of these is better to use in production code?
Are there any other, better methods?
Answered by Kornel Kisielewicz
Recently I implemented an LRU cache using a linked list spread over a hash map.
/// Typedef for URL/Entry pair
typedef std::pair< std::string, Entry > EntryPair;
/// Typedef for Cache list
typedef std::list< EntryPair > CacheList;
/// Typedef for URL-indexed map into the CacheList
typedef boost::unordered_map< std::string, CacheList::iterator > CacheMap;
/// Cache LRU list
CacheList mCacheList;
/// Cache map into the list
CacheMap mCacheMap;
It has the advantage of being O(1) for all important operations.
The insertion algorithm:
// create new entry
Entry iEntry( ... );
// push it to the front;
mCacheList.push_front( std::make_pair( aURL, iEntry ) );
// add it to the cache map
mCacheMap[ aURL ] = mCacheList.begin();
// increase count of entries
mEntries++;
// check if it's time to remove the last element
if ( mEntries > mMaxEntries )
{
// erase from the map the last cache list element
mCacheMap.erase( mCacheList.back().first );
// erase it from the list
mCacheList.pop_back();
// decrease count
mEntries--;
}
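The answer shows only the insertion path; a matching lookup would splice the accessed entry back to the front of the list. Below is a self-contained sketch of the same list-over-hash-map design, with std::unordered_map standing in for boost::unordered_map and a plain std::string value standing in for the Entry type (class and member names are illustrative, not from the answer):

```cpp
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Sketch of an O(1) LRU cache: the list holds entries in recency order
// (front = most recent); the map points at list nodes for O(1) lookup.
class LruCache {
public:
    explicit LruCache(std::size_t maxEntries) : mMaxEntries(maxEntries) {}

    void put(const std::string& key, const std::string& value) {
        auto it = mCacheMap.find(key);
        if (it != mCacheMap.end()) {  // key exists: update and touch
            it->second->second = value;
            mCacheList.splice(mCacheList.begin(), mCacheList, it->second);
            return;
        }
        mCacheList.emplace_front(key, value);   // push new entry to the front
        mCacheMap[key] = mCacheList.begin();
        if (mCacheList.size() > mMaxEntries) {  // evict the LRU entry
            mCacheMap.erase(mCacheList.back().first);
            mCacheList.pop_back();
        }
    }

    // Returns nullptr if the key is absent; otherwise moves the entry to
    // the front and returns a pointer to its value.
    const std::string* get(const std::string& key) {
        auto it = mCacheMap.find(key);
        if (it == mCacheMap.end()) return nullptr;
        // splice relinks the node in O(1) without invalidating iterators,
        // so the map's stored iterator stays valid.
        mCacheList.splice(mCacheList.begin(), mCacheList, it->second);
        return &it->second->second;
    }

private:
    std::list<std::pair<std::string, std::string>> mCacheList;
    std::unordered_map<std::string,
        std::list<std::pair<std::string, std::string>>::iterator> mCacheMap;
    std::size_t mMaxEntries;
};
```

The key property making this O(1) is that std::list::splice moves a node without copying it or invalidating iterators to it.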
Answered by Alexander Ponomarev
Here is a very simple implementation of LRU cache
https://github.com/lamerman/cpp-lru-cache.
It's easy to use, and it's easy to understand how it works. The total code is about 50 lines.
Answered by William Wong
For simplicity, maybe you should consider using Boost's MultiIndex map. If we separate the key from the data, we support multiple sets of keys on the same data.
"...use two indexes: 1) hashed, for searching a value by key; 2) sequenced, for tracking the least recently used items (the get function puts an item as the last item in the sequence; if we need to remove items from the cache, we can delete them from the beginning of the sequence)."
Note that the "project" operator "allows the programmer to move between different indices of the same multi_index_container" efficiently.
请注意,“项目”运算符“允许程序员在同一 multi_index_container 的不同索引之间有效移动”。
Answered by timday
Answered by grokus
In our production environment we use a C++ doubly linked list which is similar to the Linux kernel linked list. The beauty of it is that you can add an object to as many linked lists as you want, and list operations are fast and simple.
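The idea can be sketched with a hand-rolled intrusive node in the kernel style. Because the links are embedded in the object rather than the object in a container node, one entry can sit on several independent lists at once. This is an illustrative sketch, not the poster's actual code, and all names are hypothetical:

```cpp
#include <cstddef>

// Linux-kernel-style intrusive doubly linked list node.
struct ListNode {
    ListNode* prev;
    ListNode* next;
    ListNode() : prev(this), next(this) {}  // self-linked = empty head / detached
};

inline void listAddFront(ListNode& head, ListNode& node) {
    node.prev = &head;
    node.next = head.next;
    head.next->prev = &node;
    head.next = &node;
}

inline void listDel(ListNode& node) {
    node.prev->next = node.next;
    node.next->prev = node.prev;
    node.prev = node.next = &node;  // leave the node detached
}

// One object, two embedded links: it can live on two lists simultaneously.
struct Entry {
    int value;
    ListNode lruLink;    // position in the LRU list
    ListNode otherLink;  // position in a second, independent list
};

// container_of in C++: recover the Entry from a pointer to its embedded node.
// Valid because Entry is a standard-layout type.
inline Entry* entryFromLru(ListNode* n) {
    return reinterpret_cast<Entry*>(
        reinterpret_cast<char*>(n) - offsetof(Entry, lruLink));
}
```

For an LRU cache, touching an entry is listDel followed by listAddFront on its lruLink, and the eviction victim is recovered from the node before the head.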