CachedLLMProvider class

A wrapper that adds response caching to any LLM provider.

This decorator pattern allows transparent caching of LLM responses without modifying the underlying provider implementations.
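To illustrate the decorator pattern described above, here is a minimal, self-contained sketch of how such a wrapper can work. `LLMProvider` and `LLMCache` are stubbed with plausible shapes, and the cache's `get`/`put` method names are assumptions not confirmed by this page:

```dart
// Minimal sketch of the decorator pattern behind CachedLLMProvider.
// LLMProvider and LLMCache are stubbed here; the cache's get/put
// method names are illustrative assumptions.
abstract class LLMProvider {
  Future<String> generateResponse(String prompt, {List<String>? context});
}

abstract class LLMCache {
  Future<String?> get(String key);
  Future<void> put(String key, String value);
}

class CachedLLMProvider implements LLMProvider {
  CachedLLMProvider({required this.delegate, required this.cache});

  final LLMProvider delegate;
  final LLMCache cache;

  @override
  Future<String> generateResponse(String prompt,
      {List<String>? context}) async {
    // Serve a hit from the cache; on a miss, delegate and store the result.
    final cached = await cache.get(prompt);
    if (cached != null) return cached;
    final response =
        await delegate.generateResponse(prompt, context: context);
    await cache.put(prompt, response);
    return response;
  }
}
```

Because the wrapper implements the same interface as the wrapped provider, callers can swap it in wherever an `LLMProvider` is expected, with no changes to the provider implementations themselves.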

Implemented types
LLMProvider

Constructors

CachedLLMProvider({required LLMProvider delegate, required LLMCache cache})
Creates a cached wrapper around an existing provider.

Properties

availableModels → List<String>
List of model names available from this provider.
no setter; override
defaultModel → String
The default model to use if none is specified.
no setter; override
hashCode → int
The hash code for this object.
no setter; inherited
name → String
Human-readable name of this provider (e.g., "OpenAI", "Claude").
no setter; override
runtimeType → Type
A representation of the runtime type of the object.
no setter; inherited
stats → Map<String, dynamic>
Returns cache statistics for this provider.
no setter

Methods

generateResponse(String prompt, {List<String>? context}) → Future<String>
Generates a response from the LLM based on the given prompt.
override
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
toString() → String
A string representation of this object.
inherited

Operators

operator ==(Object other) → bool
The equality operator.
inherited