- Mastering Object-Oriented Python
- Steven F. Lott
- 246 words
- 2021-11-12 16:25:18
Using memoization or caching
The idea behind memoization is to cache previous results to avoid recomputing them. We'll use considerably more memory, but we can also dramatically speed up performance by avoiding computation.
An ordinary function doesn't have a place to cache previous results. A function is not expected to be stateful. A callable object, however, can be stateful. It can include a cache of previous results.
The following is a memoized version of our Power callable object:
```python
import collections.abc

class Power5(collections.abc.Callable):
    def __init__(self):
        self.memo = {}

    def __call__(self, x, n):
        if (x, n) not in self.memo:
            if n == 0:
                self.memo[x, n] = 1
            elif n % 2 == 1:
                self.memo[x, n] = self.__call__(x, n - 1) * x
            elif n % 2 == 0:
                t = self.__call__(x, n // 2)
                self.memo[x, n] = t * t
            else:
                raise Exception("Logic Error")
        return self.memo[x, n]

pow5 = Power5()
```
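One quick way to see the effect of the cache is to count how many times the recursive rules actually fire. The `computes` counter below is my own illustrative addition (not part of the book's class); it ticks only when a value is genuinely computed rather than found in `self.memo`:

```python
class CountingPower:
    """Memoized fast exponentiation, restated from Power5 above,
    with an illustrative `computes` counter added."""
    def __init__(self):
        self.memo = {}
        self.computes = 0  # counts actual computations, not cache hits

    def __call__(self, x, n):
        if (x, n) not in self.memo:
            self.computes += 1
            if n == 0:
                self.memo[x, n] = 1
            elif n % 2 == 1:
                self.memo[x, n] = self(x, n - 1) * x
            else:
                t = self(x, n // 2)
                self.memo[x, n] = t * t
        return self.memo[x, n]

p = CountingPower()
p(2, 10)
first = p.computes  # O(log n) computations on the first call
p(2, 10)            # identical call: pure cache hit
assert p.computes == first
```

The second, identical call performs no computation at all, which is the speedup described below.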
We revised our algorithm to work with the self.memo cache.
If the value of x**n has been requested previously, the cached result is returned and no computation is performed. This is the big speedup we spoke of earlier.
Otherwise, the value of x**n must be computed and saved in the memoization cache. The three rules for computing the fast exponent are used to get and put values in the cache. This assures us that future calculations will be able to exploit the cached values.
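The point about exploiting cached values can be seen directly: a single call populates the cache with all of the intermediate powers, so later calls for those powers never recurse. This sketch restates the memoizer compactly (same logic as Power5 above) so the demonstration is self-contained:

```python
class Power5:
    """Compact restatement of the memoized fast-exponent callable."""
    def __init__(self):
        self.memo = {}

    def __call__(self, x, n):
        if (x, n) not in self.memo:
            if n == 0:
                self.memo[x, n] = 1
            elif n % 2 == 1:
                self.memo[x, n] = self(x, n - 1) * x
            else:
                t = self(x, n // 2)
                self.memo[x, n] = t * t
        return self.memo[x, n]

pow5 = Power5()
pow5(2, 17)  # fills the cache with (2,16), (2,8), (2,4), (2,2), (2,1), (2,0)

# A later request for 2**16 is a pure cache hit -- no recursion at all.
print((2, 16) in pow5.memo)  # True
print(pow5(2, 16))           # 65536
```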
The importance of memoization can't be stressed enough. The reduction in computation can be dramatic. It is commonly implemented by replacing a slow, expensive function with a callable object.