generateResponse method
Generates a response from the LLM based on the given prompt.
prompt - The input prompt to send to the model.
context - Optional list of context strings to prepend to the prompt.
Returns the generated text response.
Throws an Exception if the HTTP request fails or the API returns a non-200 status.
Implementation
@override
Future<String> generateResponse(String prompt,
    {List<String>? context}) async {
  // Prepend any context strings, separated from the prompt by a blank line.
  final fullPrompt =
      context != null ? '${context.join('\n')}\n\n$prompt' : prompt;
  final response = await HttpUtils.postWithRetry(
    Uri.parse('https://api.openai.com/v1/chat/completions'),
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer $apiKey',
    },
    body: jsonEncode({
      'model': modelName,
      'messages': [
        {'role': 'user', 'content': fullPrompt}
      ],
    }),
    timeout: timeout,
  );
  if (response.statusCode == 200) {
    // Decode the raw bytes as UTF-8 explicitly; relying on response.body
    // can mangle non-ASCII characters when no charset header is present.
    final data = jsonDecode(utf8.decode(response.bodyBytes));
    return data['choices'][0]['message']['content'] as String;
  } else {
    throw Exception(
        'Failed to generate response from OpenAI: ${response.body}');
  }
}
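A minimal usage sketch. The concrete class name `OpenAiLlmClient` and its constructor parameters are hypothetical stand-ins for whatever class implements this method; they simply mirror the `apiKey`, `modelName`, and `timeout` fields the implementation reads.

```dart
import 'dart:io' show Platform;

Future<void> main() async {
  // Hypothetical concrete class exposing generateResponse;
  // the real class and constructor may differ.
  final client = OpenAiLlmClient(
    apiKey: Platform.environment['OPENAI_API_KEY']!,
    modelName: 'gpt-4o-mini',
    timeout: const Duration(seconds: 30),
  );

  // Context strings are joined with '\n' and prepended to the prompt,
  // separated by a blank line, before being sent as a single user message.
  final answer = await client.generateResponse(
    'Summarize the findings in one sentence.',
    context: ['Finding 1: latency dropped 40%.', 'Finding 2: cost was flat.'],
  );
  print(answer);
}
```

Note that because the method sends everything as one `user` message, callers who need a system prompt must either fold it into `context` or extend the `messages` array.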