a_sync.a_sync.modifiers.cache package
Submodules
a_sync.a_sync.modifiers.cache.memory module
- class a_sync.a_sync.modifiers.cache.memory.CacheKwargs[source]
Bases: TypedDict
Typed dictionary for cache keyword arguments.
- __getitem__()
x.__getitem__(y) <==> x[y]
- __init__(*args, **kwargs)
- __iter__()
Implement iter(self).
- clear() → None. Remove all items from D.
- copy() → a shallow copy of D
- fromkeys(iterable, value=None, /)
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
- items() → a set-like object providing a view on D's items
- keys() → a set-like object providing a view on D's keys
- pop(k[, d]) → v, remove specified key and return the corresponding value.
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
- update([E, ]**F) → None. Update D from dict/iterable E and F.
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
- values() → an object providing a view on D's values
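Since CacheKwargs is a TypedDict, it behaves as a plain dict at runtime while giving static type checkers a fixed schema. A minimal sketch of such a class, assuming its fields mirror the maxsize, ttl, and typed parameters of apply_async_memory_cache (the field names here are an assumption, not confirmed by this page):

```python
from typing import Optional, TypedDict

class CacheKwargs(TypedDict):
    # Assumed fields, mirroring apply_async_memory_cache's parameters.
    maxsize: Optional[int]
    ttl: Optional[int]
    typed: bool

# A TypedDict is an ordinary dict at runtime; the annotations only
# guide static type checkers, so normal dict access and methods work.
kwargs: CacheKwargs = {"maxsize": 128, "ttl": 60, "typed": False}
print(kwargs["maxsize"])  # plain dict access
```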
- a_sync.a_sync.modifiers.cache.memory.apply_async_memory_cache(coro_fn=None, maxsize=None, ttl=None, typed=False)[source]
Applies an asynchronous LRU cache to a coroutine function.
This function uses the alru_cache from the async_lru library to cache the results of an asynchronous coroutine function. The cache can be configured with a maximum size, a time-to-live (TTL), and whether the cache keys should be typed.
- Parameters:
coro_fn (Callable[[~P], Awaitable[T]] | int | None) – The coroutine function to be cached, or an integer to set as maxsize.
maxsize (int | None) – The maximum size of the cache. If set to -1, it is converted to None, making the cache unbounded.
ttl (int | None) – The time-to-live for cache entries in seconds. If None, entries do not expire.
typed (bool) – Whether to consider the types of arguments as part of the cache key.
- Raises:
TypeError – If maxsize is not a positive integer or None when coro_fn is None.
exceptions.FunctionNotAsync – If coro_fn is not an asynchronous function.
- Return type:
Callable[[Callable[[~P], Awaitable[T]]], Callable[[~P], Awaitable[T]]] | Callable[[~P], Awaitable[T]]
Examples
>>> @apply_async_memory_cache(maxsize=128, ttl=60)
... async def fetch_data():
...     pass
>>> async def fetch_data():
...     pass
>>> cached_fetch = apply_async_memory_cache(fetch_data, maxsize=128, ttl=60)
See also
alru_cache()
for the underlying caching mechanism.
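The caching behavior described above can be illustrated with a self-contained sketch of an async LRU cache with an optional TTL. This is not the async_lru implementation (the decorator name _async_ttl_cache is hypothetical); it is only a minimal illustration of the mechanism apply_async_memory_cache delegates to:

```python
import asyncio
import functools
import time
from collections import OrderedDict

def _async_ttl_cache(maxsize=None, ttl=None):
    """Hypothetical sketch of an async LRU cache with an optional TTL."""
    def decorator(coro_fn):
        cache = OrderedDict()  # key -> (timestamp, result)

        @functools.wraps(coro_fn)
        async def wrapper(*args):
            key = args
            if key in cache:
                ts, result = cache[key]
                if ttl is None or time.monotonic() - ts < ttl:
                    cache.move_to_end(key)  # mark as most recently used
                    return result
                del cache[key]  # entry expired; recompute below
            result = await coro_fn(*args)
            cache[key] = (time.monotonic(), result)
            if maxsize is not None and len(cache) > maxsize:
                cache.popitem(last=False)  # evict least recently used
            return result

        return wrapper
    return decorator

calls = 0

@_async_ttl_cache(maxsize=128, ttl=60)
async def fetch_data(x):
    global calls
    calls += 1
    return x * 2

async def main():
    a = await fetch_data(21)
    b = await fetch_data(21)  # served from cache; body not re-run
    return a, b

print(asyncio.run(main()), calls)  # (42, 42) 1
```

Note the second call returns the cached result without re-entering the coroutine body, which is the point of memoizing an expensive async call.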
Module contents
- class a_sync.a_sync.modifiers.cache.CacheArgs[source]
Bases: TypedDict
Typed dictionary for cache arguments.
- __getitem__()
x.__getitem__(y) <==> x[y]
- __init__(*args, **kwargs)
- __iter__()
Implement iter(self).
- clear() → None. Remove all items from D.
- copy() → a shallow copy of D
- fromkeys(iterable, value=None, /)
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
- items() → a set-like object providing a view on D's items
- keys() → a set-like object providing a view on D's keys
- pop(k[, d]) → v, remove specified key and return the corresponding value.
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
- update([E, ]**F) → None. Update D from dict/iterable E and F.
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
- values() → an object providing a view on D's values
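Because CacheArgs is a TypedDict, a populated instance can be unpacked with ** to forward cache settings as keyword arguments. A minimal sketch, assuming its fields mirror the keyword parameters of apply_async_cache below (the field names are an assumption, not confirmed by this page):

```python
from typing import Literal, Optional, TypedDict

class CacheArgs(TypedDict):
    # Assumed fields, mirroring apply_async_cache's keyword parameters.
    cache_type: Optional[Literal["memory"]]
    cache_typed: bool
    ram_cache_maxsize: Optional[int]
    ram_cache_ttl: Optional[int]

args: CacheArgs = {
    "cache_type": "memory",
    "cache_typed": False,
    "ram_cache_maxsize": 100,
    "ram_cache_ttl": None,
}

# A TypedDict unpacks like any dict, so one settings object can be
# forwarded to a function that accepts these keywords.
def describe(**kwargs):
    return sorted(kwargs)

print(describe(**args))
```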
- a_sync.a_sync.modifiers.cache.apply_async_cache(coro_fn=None, cache_type='memory', cache_typed=False, ram_cache_maxsize=None, ram_cache_ttl=None)[source]
Applies an asynchronous cache to a coroutine function.
This function can be used to apply a cache to a coroutine function, either by passing the function directly or by using it as a decorator. The cache type is currently limited to ‘memory’. If an unsupported cache type is specified, a NotImplementedError is raised.
- Parameters:
coro_fn (Callable[[~P], Awaitable[T]] | Literal['memory', None] | int | None) – The coroutine function to apply the cache to, or an integer to set as the max size.
cache_type (Literal['memory', None]) – The type of cache to use. Currently, only ‘memory’ is implemented.
cache_typed (bool) – Whether to consider types for cache keys.
ram_cache_maxsize (int | None) – The maximum size for the LRU cache. If set to an integer, it overrides coro_fn.
ram_cache_ttl (int | None) – The time-to-live for items in the LRU cache.
- Raises:
TypeError – If ‘ram_cache_maxsize’ is not an integer or None.
FunctionNotAsync – If the provided function is not asynchronous.
NotImplementedError – If an unsupported cache type is specified.
- Return type:
Callable[[Callable[[~P], Awaitable[T]]], Callable[[~P], Awaitable[T]]] | Callable[[~P], Awaitable[T]]
Examples
>>> @apply_async_cache(cache_type='memory', ram_cache_maxsize=100)
... async def my_function():
...     pass
>>> async def my_function():
...     pass
>>> cached_function = apply_async_cache(my_function, cache_type='memory', ram_cache_maxsize=100)
>>> @apply_async_cache(100, cache_type='memory')
... async def another_function():
...     pass
See also
apply_async_memory_cache()
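The flexible first argument described above (a coroutine function, an integer max size, or nothing) is a common dual-use decorator pattern. A minimal sketch of the dispatch logic, with a hypothetical name (apply_cache_sketch) and a stand-in attribute instead of a real cache wrapper; this is not the library's actual implementation:

```python
import asyncio

def apply_cache_sketch(coro_fn=None, maxsize=None):
    """Hypothetical sketch of coro_fn-or-int decorator dispatch."""
    # Case 1: called as apply_cache_sketch(128) -- the integer is
    # treated as maxsize, and a decorator is returned.
    if isinstance(coro_fn, int):
        return lambda fn: apply_cache_sketch(fn, maxsize=coro_fn)
    # Case 2: called as @apply_cache_sketch(maxsize=...) -- no function
    # yet, so return a decorator waiting for one.
    if coro_fn is None:
        return lambda fn: apply_cache_sketch(fn, maxsize=maxsize)
    # Case 3: called with the coroutine function directly.
    if not asyncio.iscoroutinefunction(coro_fn):
        raise TypeError(f"{coro_fn!r} is not an async function")
    coro_fn._maxsize = maxsize  # stand-in for wrapping with a real cache
    return coro_fn

@apply_cache_sketch(100)
async def another_function():
    pass

print(another_function._maxsize)  # 100
```

The integer branch is what makes the `@apply_async_cache(100, cache_type='memory')` form in the examples above possible.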