java - Why does a cache use the Most Recently Used (MRU) algorithm as its eviction policy?

Disclaimer: This page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/5088128/

Date: 2020-10-30 09:20:56  Source: igfitidea

Why does cache use Most Recently Used (MRU) algorithm as evict policy?

Tags: java, algorithm, caching

Asked by u290629

I know the Most Recently Used (MRU) algorithm and its reverse, Least Recently Used (LRU).


I think LRU is reasonable, since the LRU element is the one least likely to be used in the future. However, the MRU element is the one most likely to be used in the future, so why evict it? What is a reasonable scenario for MRU?

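To make the contrast concrete, here is a minimal MRU cache sketch in Java (an illustrative toy, not thread-safe; the class and field names are my own): on overflow it evicts the entry that was touched last, the exact opposite of LRU.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal MRU cache sketch (illustrative only, not thread-safe).
// On overflow it evicts the entry that was accessed most recently.
class MruCache<K, V> {
    private final int capacity;
    private final Map<K, V> map = new HashMap<>();
    private K mostRecent; // key of the last successful get/put

    MruCache(int capacity) {
        this.capacity = capacity;
    }

    V get(K key) {
        V value = map.get(key);
        if (value != null) {
            mostRecent = key;
        }
        return value;
    }

    void put(K key, V value) {
        if (!map.containsKey(key) && map.size() >= capacity) {
            map.remove(mostRecent); // evict the MRU entry, not the LRU one
        }
        map.put(key, value);
        mostRecent = key;
    }
}
```

With capacity 2, after `put(a)`, `put(b)`, `get(a)`, inserting `c` evicts `a` (the most recently touched key), whereas an LRU cache in the same state would have evicted `b`.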

Answered by Jon Skeet

Imagine you were looking up the details of buses as they arrived at a bus stop, based on their bus number (or whatever identifier you use).


It's somewhat reasonable to think that if you've just seen a number 36 bus, you're less likely to see another one imminently than to see one of the other buses that stop there.


Just one example, but the idea is more general: in some cases, having "just seen something" is a good indicator that you're unlikely to see the same thing again soon.


Answered by Kyle Chadha

Perhaps a more tangible example would be a media server. When the user has completed watching a video (let's say it's an episode of a TV show), they are presumably the least likely to want to view it again. So if you must evict something, evict the most recently viewed item.


In practice though, I believe this type of cache is typically used in addition to an LRU or LFU cache, where the two caches in tandem allow you to cover a wide variety of cases.


Answered by Jeremiah Willcock

The use case is when you are iterating through the same (larger-than-cache) data multiple times, and so you will not go back to recently accessed data.

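This scanning case can be shown with a small, illustrative simulation (a toy model, not a production cache): repeatedly scan N keys through a cache of capacity C < N and count hits under each policy. Under LRU, the scan evicts every entry just before it is needed again, so after warm-up the hit rate is zero; MRU keeps a stable subset of keys resident and hits on them every pass.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Set;

// Toy simulation: scan keys 0..n-1 repeatedly through a cache of
// size capacity < n, counting hits under LRU vs MRU eviction.
public class ScanDemo {
    static int countHits(int n, int capacity, int passes, boolean mru) {
        Set<Integer> cache = new HashSet<>();
        ArrayDeque<Integer> order = new ArrayDeque<>(); // recency order, newest first
        int hits = 0;
        for (int p = 0; p < passes; p++) {
            for (int key = 0; key < n; key++) {
                if (cache.contains(key)) {
                    hits++;
                    order.remove(key);   // refresh recency on a hit
                    order.addFirst(key);
                } else {
                    if (cache.size() == capacity) {
                        // MRU evicts the newest entry, LRU the oldest
                        int victim = mru ? order.removeFirst() : order.removeLast();
                        cache.remove(victim);
                    }
                    cache.add(key);
                    order.addFirst(key);
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        System.out.println("LRU hits: " + countHits(10, 5, 3, false));
        System.out.println("MRU hits: " + countHits(10, 5, 3, true));
    }
}
```

With 10 keys, a cache of 5, and 3 passes, LRU scores zero hits while MRU hits on a resident subset of keys in every pass after the first.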

Answered by Stephen C

I think both @Jon Skeet's and @Jeremiah Willcock's answers are describing using MRU as a way to avoid polluting the cache with useless entries.


  1. This only works if your cache APIs allow you to change the policy on the fly; e.g. on a per-request basis. Setting your cache policy to MRU in "normal" situations is probably a bad idea ... because your cache becomes ineffective once it fills up.

  2. MRU has the problem that if you get a hit on an entry that is often used in "normal" mode while doing MRU lookups, you end up throwing out the entry ...


Better alternatives to MRU for doing a scan without polluting the cache are:


  • bypass the cache entirely,
  • probe the cache without doing a read through / update, and without altering the LRU chains.
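The second alternative, probing without touching the recency order, might be sketched on top of Java's `LinkedHashMap` like this (the `peek` method is my own illustrative addition; iterating the entry set does not count as an access in `LinkedHashMap`, so it leaves the LRU chain unchanged):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache with a peek() that inspects an entry without promoting it
// in the recency order, so a scan can probe the cache without
// polluting it. Illustrative sketch, not thread-safe.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true -> LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }

    // Read without updating the LRU chain: iterates the entry set
    // (O(n), fine for a sketch) instead of calling get(), which would
    // move the entry to the most-recently-used position.
    V peek(K key) {
        for (Map.Entry<K, V> e : entrySet()) {
            if (e.getKey().equals(key)) {
                return e.getValue();
            }
        }
        return null;
    }
}
```

After `put(a)`, `put(b)` on a capacity-2 cache, a `peek(a)` followed by `put(c)` still evicts `a`, whereas a `get(a)` would have promoted it and caused `b` to be evicted instead.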


For what it is worth, I cannot think of any use-cases for MRU that don't fit this general pattern.




Incidentally, @Jon Skeet's example of bus arrivals is not always borne out in reality, due to the bunching effect.


  • If a bus is running late, there are likely to be more people than average waiting at each bus stop. The bus has to stop more frequently, and stays longer at each stop. This slows the late bus down further.

  • A bus that is on time and following the late bus will typically have fewer people than average waiting at each bus stop. (Because they just got onto the late bus.) This speeds up the following bus.

  • The net result is that the buses tend to bunch up.


See: https://en.wikipedia.org/wiki/Bus_bunching


Answered by user3612388

Let's say you are caching the seats of a hall for a concert, to expedite the booking. As your application books the seats, remove the cached items from the cache, as they are no longer required by the booking application.

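The booking flow described above amounts to explicit MRU-style invalidation: the entry just used (the seat just sold) is the one least likely to be needed again, so it is removed immediately. A minimal sketch (class and method names are my own, for illustration only):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the seat-booking idea: cache the available seats and
// drop each one the moment it is booked, since a booked seat is the
// entry least likely to be looked up again. Illustrative only.
class SeatCache {
    private final Map<String, String> availableSeats = new ConcurrentHashMap<>();

    void addSeat(String seatId, String details) {
        availableSeats.put(seatId, details);
    }

    String lookup(String seatId) {
        return availableSeats.get(seatId);
    }

    // Booking evicts the "most recently used" entry: the seat just sold.
    void book(String seatId) {
        availableSeats.remove(seatId);
    }
}
```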