LLMCache class

An LRU cache for LLM responses, used to reduce API costs and improve response times.

The cache stores responses keyed by a hash of the prompt and context. It supports both in-memory caching and persistent file-based caching.
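The exact hashing scheme is not specified on this page; the following is a minimal sketch of one plausible key derivation, assuming SHA-256 from package:crypto (the function name cacheKey and the NUL separator are illustrative, not part of the API):

import 'dart:convert';
import 'package:crypto/crypto.dart';

// Hypothetical key derivation: join the prompt, model, and optional
// context with a NUL separator, then hash the result with SHA-256.
String cacheKey(String prompt, String model, {List<String>? context}) {
  final material = [prompt, model, ...?context].join('\u0000');
  return sha256.convert(utf8.encode(material)).toString();
}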

Constructors

LLMCache({int maxEntries = _defaultMaxEntries, Duration ttl = _defaultTtl, String? persistPath})
Creates a new LLM cache.
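A hedged usage sketch; the argument values below are illustrative, and the defaults behind _defaultMaxEntries and _defaultTtl are not documented here:

// Construct a cache with explicit (illustrative) limits.
final cache = LLMCache(
  maxEntries: 500,
  ttl: Duration(hours: 6),
  persistPath: 'llm_cache.json', // hypothetical path; enables persistent file-based caching
);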

Properties

maxEntries → int
final
stats → Map<String, dynamic>
Returns cache statistics (see the sketch after this list).
no setter
ttl → Duration
final
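The keys of the stats map are not documented on this page; continuing the constructor sketch above, the entries read below ('hits', 'misses') are assumptions for illustration only:

final stats = cache.stats;
// 'hits' and 'misses' are assumed keys, not confirmed by this page.
print('hits: ${stats['hits']}, misses: ${stats['misses']}');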

Methods

clear() → void
Clears all cached entries.
get(String prompt, String model, {List<String>? context}) → String?
Gets a cached response if available and not expired (see the round-trip sketch after this list).
put(String prompt, String model, String response, {List<String>? context}) → void
Caches a response.
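Putting get, put, and clear together, a hedged round-trip sketch; the prompt, model name, and context strings are illustrative:

void main() {
  final cache = LLMCache(); // documented defaults for maxEntries and ttl

  const prompt = 'Summarize the Dart event loop.';
  const model = 'gpt-4o'; // illustrative model name
  const context = ['retrieved doc chunk']; // optional context strings

  // First lookup misses; call the real API, then cache the response.
  var response = cache.get(prompt, model, context: context);
  if (response == null) {
    response = '...response from an actual API call...';
    cache.put(prompt, model, response, context: context);
  }

  // Later lookups with the same key hit until the TTL expires.
  assert(cache.get(prompt, model, context: context) != null);

  cache.clear(); // evict all entries
}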
