Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA terms and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/49883177/
How does lru_cache (from functools) work?
Asked by Yono
Especially when using recursive code, there are massive improvements with lru_cache. I do understand that a cache is a space that stores data that has to be served fast and saves the computer from recomputing.
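As a quick illustration of that speedup (a standard example, not part of the original question), memoizing a recursive Fibonacci makes each distinct argument get computed only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed only once; repeated calls are served from the cache.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))            # 832040, computed with only 31 actual function executions
print(fib.cache_info())   # hits/misses counters exposed by the wrapper
```

Without the decorator, fib(30) would trigger over a million recursive calls.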
How does the Python lru_cache from functools work internally?
I'm looking for a specific answer: does it use dictionaries like the rest of Python? Does it only store the return value?
I know that Python is heavily built on top of dictionaries; however, I couldn't find a specific answer to this question. Hopefully, someone can simplify this answer for all the users on StackOverflow.
Accepted answer by ndpu
The source of functools is available here: https://github.com/python/cpython/blob/3.6/Lib/functools.py
lru_cache uses the _lru_cache_wrapper decorator (the Python decorator-with-arguments pattern), which keeps a cache dictionary in its closure (every decorated function has its own cache dict) where it saves the return values of the called function. The dictionary key is generated from the arguments by the _make_key function. I added some comments in caps:
# ACCORDING TO THE PASSED maxsize ARGUMENT, _lru_cache_wrapper
# DEFINES AND RETURNS ONE OF THE wrapper DECORATORS

def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo):
    # Constants shared by all lru cache instances:
    sentinel = object()       # unique object used to signal cache misses
    cache = {}                # RESULTS ARE SAVED HERE
    cache_get = cache.get     # bound method to lookup a key or return None

    # ... maxsize is None:

    def wrapper(*args, **kwds):
        # Simple caching without ordering or size limit
        nonlocal hits, misses
        key = make_key(args, kwds, typed)   # BUILD A KEY FROM ARGUMENTS
        result = cache_get(key, sentinel)   # TRY TO GET A PREVIOUS CALL'S RESULT
        if result is not sentinel:          # ALREADY CALLED WITH THESE ARGUMENTS
            hits += 1
            return result                   # RETURN SAVED RESULT
                                            # WITHOUT ACTUALLY CALLING THE FUNCTION
        result = user_function(*args, **kwds)  # FUNCTION CALL - if cache[key] is empty
        cache[key] = result                    # SAVE RESULT
        misses += 1
        return result

    # ...

    return wrapper
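This caching is observable from the outside: a second call with the same arguments returns the saved result without re-running the function body. A small sketch (the side-effect list is only there to record actual executions):

```python
from functools import lru_cache

calls = []

@lru_cache(maxsize=None)
def square(x):
    calls.append(x)      # runs only on a cache miss
    return x * x

square(4)
square(4)                # served from cache; the body does not run again
square(5)
print(calls)                 # [4, 5]
print(square.cache_info())   # CacheInfo(hits=1, misses=2, maxsize=None, currsize=2)
```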
Answered by Bubble Bubble Bubble Gut
You can check out the source code here.
Essentially it uses two data structures: a dictionary mapping function parameters to their result, and a linked list to keep track of your function call history.
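The same dict-plus-recency idea can be sketched with collections.OrderedDict (a simplified stand-in for illustration only; CPython actually uses a hand-rolled circular doubly linked list):

```python
from collections import OrderedDict

def lru_cached(maxsize):
    # Minimal LRU decorator sketch -- not CPython's implementation.
    def decorator(func):
        cache = OrderedDict()
        def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)        # mark entry as most recently used
                return cache[args]
            result = func(*args)
            cache[args] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)      # evict the least recently used entry
            return result
        return wrapper
    return decorator

@lru_cached(maxsize=2)
def double(x):
    return 2 * x

print(double(1), double(2), double(3))   # 2 4 6; by now the entry for 1 is evicted
```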
The cache is essentially implemented using the following, which is pretty self-explanatory.
cache = {}
cache_get = cache.get
....
make_key = _make_key # build a key from the function arguments
key = make_key(args, kwds, typed)
result = cache_get(key, sentinel)
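Because the key is built from the positional and keyword arguments (and, when typed=True, from their types as well), calls that differ only in argument type get separate cache entries. A small sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None, typed=True)
def identity(x):
    return x

identity(1)
identity(1.0)                 # typed=True -> different key, so a second cache miss
print(identity.cache_info())  # CacheInfo(hits=0, misses=2, maxsize=None, currsize=2)
```

With the default typed=False, 1 and 1.0 would share one entry since they compare equal.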
The gist of updating the linked list is:
elif full:
    oldroot = root
    oldroot[KEY] = key
    oldroot[RESULT] = result
    # update the linked list to pop out the least recent function call information
    root = oldroot[NEXT]
    oldkey = root[KEY]
    oldresult = root[RESULT]
    root[KEY] = root[RESULT] = None
    ......
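That eviction is easy to observe with a small maxsize: once the cache is full, each new key pops out the least recently used entry, so a previously cached argument can miss again:

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def f(x):
    return x * 10

f(1)
f(2)                   # cache now holds entries for 1 and 2
f(3)                   # cache full: the least recently used entry (1) is evicted
f(1)                   # misses again -- 1 was dropped from the linked list
print(f.cache_info())  # CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)
```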