ChatMistralAI class

Wrapper around the Mistral AI Chat Completions API.

Mistral AI brings the strongest open generative models to developers, along with efficient ways to deploy and customise them for production.

Example:

final chatModel = ChatMistralAI(apiKey: '...');
final messages = [
  ChatMessage.system('You are a helpful assistant that translates English to French.'),
  ChatMessage.humanText('I love programming.'),
];
final prompt = PromptValue.chat(messages);
final res = await chatModel.invoke(prompt);
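
ChatMistralAI also supports streaming the response. A minimal sketch, assuming each ChatResult chunk exposes the generated message via its output property:

final stream = chatModel.stream(prompt);
await for (final chunk in stream) {
  // `output.content` is assumed here; adjust to the actual ChatResult API.
  print(chunk.output.content);
}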

Setup

To use ChatMistralAI you need a Mistral AI account and an API key. You can get one from the Mistral AI console (https://console.mistral.ai).
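
For example, you can read the key from an environment variable instead of hard-coding it (a minimal sketch; MISTRAL_API_KEY is an arbitrary variable name):

import 'dart:io';

final chatModel = ChatMistralAI(
  apiKey: Platform.environment['MISTRAL_API_KEY'],
);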

Available models

The following models are available at the moment:

  • mistral-tiny: Mistral 7B Instruct v0.2 (a minor release of Mistral 7B Instruct). It only works in English and obtains 7.6 on MT-Bench.
  • mistral-small: Mixtral 8x7B. It masters English/French/Italian/German/Spanish and code and obtains 8.3 on MT-Bench.
  • mistral-medium: a prototype model that currently ranks among the top serviced models available based on standard benchmarks. It masters English/French/Italian/German/Spanish and code and obtains a score of 8.6 on MT-Bench.

Note that this list may not be up-to-date. Refer to the Mistral AI documentation for the latest list of available models.

Call options

You can configure the parameters that will be used when calling the chat completions API in several ways:

Default options:

Use the defaultOptions parameter to set the default options. These options will be used unless you override them when generating completions.

final chatModel = ChatMistralAI(
  defaultOptions: const ChatMistralAIOptions(
    model: 'mistral-medium',
    temperature: 0,
  ),
);

Call options:

You can override the default options when invoking the model:

final res = await chatModel.invoke(
  prompt,
  options: const ChatMistralAIOptions(randomSeed: 9999),
);

Bind:

You can also change the options in a Runnable pipeline using the bind method.

In this example, we use a different model for each question:

final chatModel = ChatMistralAI(apiKey: '...');
const outputParser = StringOutputParser();
final prompt1 = PromptTemplate.fromTemplate('How are you {name}?');
final prompt2 = PromptTemplate.fromTemplate('How old are you {name}?');
final chain = Runnable.fromMap({
  'q1': prompt1 |
      chatModel.bind(const ChatMistralAIOptions(model: 'mistral-tiny')) |
      outputParser,
  'q2': prompt2 |
      chatModel.bind(const ChatMistralAIOptions(model: 'mistral-medium')) |
      outputParser,
});
final res = await chain.invoke({'name': 'David'});

Advanced

Custom HTTP client

You can always provide your own implementation of http.Client for further customization:

final client = ChatMistralAI(
  apiKey: 'MISTRAL_AI_API_KEY',
  client: MyHttpClient(),
);
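
MyHttpClient is not part of the package; here is a minimal sketch of such a client, extending http.BaseClient to customize every outgoing request:

import 'package:http/http.dart' as http;

class MyHttpClient extends http.BaseClient {
  final http.Client _inner = http.Client();

  @override
  Future<http.StreamedResponse> send(http.BaseRequest request) {
    // Example customization: log every request before sending it.
    print('${request.method} ${request.url}');
    return _inner.send(request);
  }
}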

Using a proxy

HTTP proxy

You can use your own HTTP proxy by overriding the baseUrl and providing your required headers:

final client = ChatMistralAI(
  baseUrl: 'https://my-proxy.com',
  headers: {'x-my-proxy-header': 'value'},
  queryParams: {'x-my-proxy-query-param': 'value'},
);

If you need further customization, you can always provide your own http.Client.

SOCKS5 proxy

To use a SOCKS5 proxy, you can use the socks5_proxy package and a custom http.Client.
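
A minimal sketch, following the usage documented by the socks5_proxy package (the proxy address and port are placeholders; verify the exact API against the package documentation):

import 'dart:io';

import 'package:http/io_client.dart';
import 'package:socks5_proxy/socks_client.dart';

final httpClient = HttpClient();
// Route all connections through a local SOCKS5 proxy.
SocksTCPClient.assignToHttpClient(httpClient, [
  ProxySettings(InternetAddress.loopbackIPv4, 1080),
]);
final chatModel = ChatMistralAI(
  apiKey: 'MISTRAL_AI_API_KEY',
  client: IOClient(httpClient),
);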

Constructors

ChatMistralAI.new({String? apiKey, String baseUrl = 'https://api.mistral.ai/v1', Map<String, String>? headers, Map<String, dynamic>? queryParams, Client? client, ChatMistralAIOptions defaultOptions = const ChatMistralAIOptions(model: defaultModel), String encoding = 'cl100k_base'})
Create a new ChatMistralAI instance.

Properties

defaultOptions → ChatMistralAIOptions
The default options to use when invoking the Runnable.
final, inherited
encoding ↔ String
The encoding used by tiktoken when tokenize is called.
getter/setter pair
hashCode → int
The hash code for this object.
no setter, inherited
modelType → String
The return type of the language model.
no setter
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited

Methods

batch(List<PromptValue> inputs, {List<ChatMistralAIOptions>? options}) → Future<List<ChatResult>>
Batches the invocation of the Runnable on the given inputs.
inherited
bind(ChatMistralAIOptions options) → RunnableBinding<PromptValue, ChatMistralAIOptions, ChatResult>
Binds the Runnable to the given options.
inherited
call(List<ChatMessage> messages, {ChatMistralAIOptions? options}) → Future<AIChatMessage>
Runs the chat model on the given messages and returns a chat message.
inherited
close() → void
Cleans up any resources associated with the Runnable.
countTokens(PromptValue promptValue, {ChatMistralAIOptions? options}) → Future<int>
Returns the number of tokens resulting from tokenizing the given prompt.
inherited
getCompatibleOptions(RunnableOptions? options) → ChatMistralAIOptions?
Returns the given options if they are compatible with the Runnable, otherwise returns null.
inherited
invoke(PromptValue input, {ChatMistralAIOptions? options}) → Future<ChatResult>
Invokes the Runnable on the given input.
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
pipe<NewRunOutput extends Object?, NewCallOptions extends RunnableOptions>(Runnable<ChatResult, NewCallOptions, NewRunOutput> next) → RunnableSequence<PromptValue, NewRunOutput>
Pipes the output of this Runnable into another Runnable using a RunnableSequence.
inherited
stream(PromptValue input, {ChatMistralAIOptions? options}) → Stream<ChatResult>
Streams the output of invoking the Runnable on the given input.
streamFromInputStream(Stream<PromptValue> inputStream, {ChatMistralAIOptions? options}) → Stream<ChatResult>
Streams the output of invoking the Runnable on the given inputStream.
inherited
tokenize(PromptValue promptValue, {ChatMistralAIOptions? options}) → Future<List<int>>
Tokenizes the given prompt using tiktoken.
toString() → String
A string representation of this object.
inherited
withFallbacks(List<Runnable<PromptValue, RunnableOptions, ChatResult>> fallbacks) → RunnableWithFallback<PromptValue, ChatResult>
Adds fallback runnables to be invoked if the primary runnable fails.
inherited
withRetry({int maxRetries = 3, FutureOr<bool> retryIf(Object e)?, List<Duration?>? delayDurations, bool addJitter = false}) → RunnableRetry<PromptValue, ChatResult>
Adds retry logic to an existing runnable.
inherited

Operators

operator ==(Object other) → bool
The equality operator.
inherited

Constants

defaultModel → const String
The default model to use unless another is specified.