Difference between functools' cache and lru_cache


Recently I came across functools.cache and didn't know how it differs from functools.lru_cache.

I found posts about the difference between functools.cached_property and lru_cache but nothing specifically for cache and lru_cache.

CodePudding user response:

functools.cache was added in Python 3.9.

The documentation states:

Simple lightweight unbounded function cache. Sometimes called “memoize”.

Returns the same as lru_cache(maxsize=None), creating a thin wrapper around a dictionary lookup for the function arguments. Because it never needs to evict old values, this is smaller and faster than lru_cache() with a size limit.

Example from the docs:

@cache
def factorial(n):
    return n * factorial(n-1) if n else 1

>>> factorial(10)      # no previously cached result, makes 11 recursive calls
3628800
>>> factorial(5)       # just looks up cached value result
120
>>> factorial(12)      # makes two new recursive calls, the other 10 are cached
479001600

So, in short: cache and lru_cache(maxsize=None) are exactly the same (link to cpython source). But when you don't want to limit the cache size, using cache may make the intent clearer, since a "least recently used" cache that never evicts anything doesn't make much sense as a name.
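You can check the equivalence yourself with a quick sketch (assuming Python 3.9+, function names are illustrative): a function wrapped with cache exposes the same lru_cache interface and reports an unbounded maxsize.

from functools import cache, lru_cache

@cache
def fib_a(n):
    return n if n < 2 else fib_a(n - 1) + fib_a(n - 2)

@lru_cache(maxsize=None)
def fib_b(n):
    return n if n < 2 else fib_b(n - 1) + fib_b(n - 2)

fib_a(20)
fib_b(20)

# Both wrappers report an unbounded cache and offer cache_info()/cache_clear()
print(fib_a.cache_info().maxsize)  # None
print(fib_b.cache_info().maxsize)  # None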
