chat method
Stream<CompletionChunk> chat(
  List<ChatMessage> messages, {
  required String model,
  String? format,
  String? template,
  ModelOptions? options,
  bool chunked = true,
})
Generate the next message in a chat with a provided model.
This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.
messages is the list of ChatMessage objects exchanged in the conversation so far.
model is the name of the model that should generate the response.
format is the format to return the response in; pass "json" to request structured JSON output.
template is the prompt template to apply to the request, overriding the model's default template.
options is a set of additional model parameters (such as temperature) to apply to this request.
chunked is whether the response should be streamed chunk by chunk as it is generated; when false, the server returns the complete response as a single chunk.
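A minimal call site might look like the sketch below. Only the chat signature above is taken from this page; the client class name (OllamaClient), its constructor, and the ChatMessage and CompletionChunk members used here are assumptions for illustration.

import 'dart:io';

Future<void> main() async {
  // Hypothetical client construction; the real constructor may differ.
  final client = OllamaClient(baseUrl: Uri.parse('http://localhost:11434/'));

  final stream = client.chat(
    [ChatMessage(role: 'user', content: 'Why is the sky blue?')],
    model: 'llama3',
  );

  // Print each streamed piece of the reply as it arrives; the final
  // chunk also carries the request statistics mentioned above.
  await for (final chunk in stream) {
    stdout.write(chunk.message?.content ?? '');
  }
}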
Implementation
Stream<CompletionChunk> chat(
  List<ChatMessage> messages, {
  required String model,
  String? format,
  String? template,
  ModelOptions? options,
  bool chunked = true,
}) async* {
  final url = baseUrl.resolve('api/chat');
  // Open a POST request to the server and send the JSON request body.
  final request = await _client.postUrl(url);
  request.headers.contentType = ContentType.json;
  request.write(jsonEncode({
    'messages': [for (final message in messages) message.toJson()],
    'model': model,
    'stream': chunked,
    'format': format,
    'template': template,
    'options': options?.toJson(),
  }));
  final response = await request.close();
  // The endpoint streams newline-delimited JSON. Split the decoded
  // bytes on line boundaries so each string passed to jsonDecode is
  // one complete JSON object rather than an arbitrary network chunk.
  final lines = response.transform(utf8.decoder).transform(const LineSplitter());
  await for (final line in lines) {
    if (line.isEmpty) continue;
    yield CompletionChunk.fromJson(jsonDecode(line) as Map<String, dynamic>);
  }
}
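Each streamed line is a standalone JSON object in Ollama's chat format; the final line has "done": true and carries the statistics noted above. The sketch below decodes one such line; the field values are illustrative, not taken from a real response.

import 'dart:convert';

void main() {
  // One representative streamed line (illustrative values only).
  const line = '{"model": "llama3", "created_at": "2024-01-01T00:00:00Z", '
      '"message": {"role": "assistant", "content": "Hi"}, "done": false}';

  final json = jsonDecode(line) as Map<String, dynamic>;
  print(json['message']['content']); // Hi
  print(json['done']); // false
}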