generateResponse method

@override
Future<String> generateResponse(
  String prompt, {
  List<String>? context,
})

override

Generates a response from the LLM based on the given prompt.

prompt - The input prompt to send to the model.
context - Optional list of context strings to prepend to the prompt.

Returns the generated text response.

Throws an Exception if the request to the underlying provider fails.
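As an illustration, a call might look like this (the `CachingProvider` name and the prompt strings are hypothetical, chosen only for the example):

```dart
// Hypothetical wrapper instance; construction details depend on your setup.
final provider = CachingProvider(/* delegate provider, cache, ... */);

final answer = await provider.generateResponse(
  'Summarize the release notes.',
  context: ['Project: example', 'Version: 2.1.0'],
);
print(answer);
```

A repeated call with the same prompt and context would be served from the cache rather than hitting the underlying model.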

Implementation

@override
Future<String> generateResponse(
  String prompt, {
  List<String>? context,
}) async {
  // Try cache first
  final cached = _cache.get(prompt, name, context: context);
  if (cached != null) {
    _cacheHits++;
    return cached;
  }

  // Cache miss - call the actual provider
  _cacheMisses++;
  final response = await _delegate.generateResponse(prompt, context: context);

  // Cache the response
  _cache.put(prompt, name, response, context: context);

  return response;
}
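The method follows the decorator pattern: the wrapper implements the same interface as `_delegate` and adds caching transparently. For reference, a minimal in-memory cache with the `get`/`put` shape used above might look like the sketch below; the key scheme and class name are assumptions for illustration, not the actual implementation:

```dart
// Hypothetical minimal cache; keys combine model name, prompt, and context.
class ResponseCache {
  final _entries = <String, String>{};

  String _key(String prompt, String model, {List<String>? context}) =>
      [model, prompt, ...?context].join('\u0000');

  String? get(String prompt, String model, {List<String>? context}) =>
      _entries[_key(prompt, model, context: context)];

  void put(String prompt, String model, String response,
          {List<String>? context}) =>
      _entries[_key(prompt, model, context: context)] = response;
}
```

Note that because `context` is part of the key, the same prompt with different context lists produces distinct cache entries, which matches the cache lookup in the implementation above.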