CreateMessageRequest extension type

A request from the server to sample an LLM via the client.

The client has full discretion over which model to select. The client should also inform the user before beginning sampling, to allow them to inspect the request (human in the loop) and decide whether to approve it.
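On the wire this corresponds to an MCP `sampling/createMessage` JSON-RPC request. A minimal sketch in TypeScript, following the MCP schema (the field names and method name come from the spec; the `id` and message content are illustrative):

```typescript
// Hedged sketch of the JSON-RPC payload a server sends to request sampling.
// Field names follow the MCP schema; the concrete values are assumptions.
const createMessageRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "sampling/createMessage",
  params: {
    messages: [
      { role: "user", content: { type: "text", text: "Summarize this file." } },
    ],
    maxTokens: 256, // required: cap on the number of tokens to sample
    systemPrompt: "You are a concise assistant.", // optional
    temperature: 0.7, // optional sampling temperature
  },
};

// The client is expected to surface this to the user for approval
// before actually sampling (human in the loop).
console.log(createMessageRequest.method);
```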

Implemented types

Constructors

CreateMessageRequest.new({required List<SamplingMessage> messages, ModelPreferences? modelPreferences, String? systemPrompt, IncludeContext? includeContext, double? temperature, required int maxTokens, List<String>? stopSequences, Map<String, Object?>? metadata, MetaWithProgressToken? meta})
factory
CreateMessageRequest.fromMap(Map<String, Object?> _value)

Properties

includeContext IncludeContext?
A request to include context from one or more MCP servers (including the caller), to be attached to the prompt.
no setter
maxTokens int
The maximum number of tokens to sample, as requested by the server.
no setter
messages List<SamplingMessage>
The messages to send to the LLM.
no setter
meta MetaWithProgressToken?
If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress).
no setter inherited
metadata Map<String, Object?>?
Optional metadata to pass through to the LLM provider.
no setter
modelPreferences ModelPreferences?
The server's preferences for which model to select.
no setter
stopSequences List<String>?
Note: This has no documentation in the specification or schema.
no setter
systemPrompt String?
An optional system prompt the server wants to use for sampling.
no setter
temperature double?
The temperature to use for sampling.
no setter
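The optional properties above map directly onto the params object of the MCP schema. An illustrative params sketch exercising them in TypeScript (field names per the spec; the hint name, priorities, and metadata contents are assumptions):

```typescript
// Hedged example of a params object using the optional fields.
// The client has full discretion over the final model choice,
// so modelPreferences are advisory only.
const params = {
  messages: [
    { role: "user", content: { type: "text", text: "Classify this log line." } },
  ],
  maxTokens: 128,
  modelPreferences: {
    hints: [{ name: "claude-3" }], // substring hints, evaluated in order (name is illustrative)
    speedPriority: 0.8,            // 0-1 priorities; the client may ignore them
    intelligencePriority: 0.4,
  },
  includeContext: "thisServer",    // "none" | "thisServer" | "allServers"
  stopSequences: ["\n\n"],
  metadata: { traceId: "abc123" }, // opaque passthrough to the LLM provider (contents illustrative)
};
```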

Constants

methodName → const String