LLMCache class
An LRU cache for LLM responses, used to reduce API costs and improve response times.
The cache stores responses keyed by a hash of the prompt and context. It supports both in-memory caching and persistent file-based caching.
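The keying scheme described above might be sketched as follows. This is an illustrative assumption, not the class's actual implementation: the helper name `cacheKey`, the use of `package:crypto`, and the join separator are all hypothetical.

```dart
import 'dart:convert';
import 'package:crypto/crypto.dart';

// Hypothetical sketch of a hash-based cache key combining model,
// prompt, and optional context. The real derivation may differ.
String cacheKey(String prompt, String model, List<String>? context) {
  final material = [model, prompt, ...?context].join('\u0000');
  return sha256.convert(utf8.encode(material)).toString();
}
```

Hashing the combined inputs keeps keys fixed-length regardless of prompt size, and including the context ensures that the same prompt under different contexts maps to distinct entries.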
Constructors
Properties
Methods
- clear() → void - Clears all cached entries.
- get(String prompt, String model, {List<String>? context}) → String? - Gets a cached response if available and not expired.
- noSuchMethod(Invocation invocation) → dynamic - Invoked when a nonexistent method or property is accessed. (inherited)
- put(String prompt, String model, String response, {List<String>? context}) → void - Caches a response.
- toString() → String - A string representation of this object. (inherited)
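A typical get-then-put flow for these methods might look like the sketch below. The constructor call and the `callApi` helper are assumptions for illustration; the actual constructor parameters are not documented here.

```dart
// Hypothetical usage sketch; LLMCache() constructor arguments and
// callApi are assumptions, not part of the documented API.
final cache = LLMCache();

String askModel(String prompt, String model, {List<String>? context}) {
  // Return a cached response when one is available and unexpired.
  final cached = cache.get(prompt, model, context: context);
  if (cached != null) return cached;

  // Cache miss: call the backing LLM API, then store the result.
  final response = callApi(prompt, model); // hypothetical API call
  cache.put(prompt, model, response, context: context);
  return response;
}
```

Checking `get` before every API call is what yields the cost and latency savings described above: repeated identical prompts are served from the cache instead of the network.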
Operators
- operator ==(Object other) → bool - The equality operator. (inherited)