ChatAnthropic class
Wrapper around the Anthropic Messages API (aka the Claude API).
Example:
final chatModel = ChatAnthropic(apiKey: '...');
final messages = [
ChatMessage.system('You are a helpful assistant that translates English to French.'),
ChatMessage.humanText('I love programming.'),
];
final prompt = PromptValue.chat(messages);
final res = await chatModel.invoke(prompt);
Authentication
The Anthropic API uses API keys for authentication. Visit your API Keys page to retrieve the API key you'll use in your requests.
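For example, rather than hard-coding the key, you can read it from an environment variable. A minimal sketch (the ANTHROPIC_API_KEY variable name and the langchain_anthropic import are assumptions based on common convention):
import 'dart:io';

import 'package:langchain_anthropic/langchain_anthropic.dart';

// Read the API key from the environment instead of hard-coding it in source.
final apiKey = Platform.environment['ANTHROPIC_API_KEY'];
final chatModel = ChatAnthropic(apiKey: apiKey);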
Available models
The following models are available:
claude-3-5-sonnet-20240620
claude-3-haiku-20240307
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-2.0
claude-2.1
Note that this list may not be up to date. See https://docs.anthropic.com/en/docs/about-claude/models for the latest list.
Call options
You can configure the parameters that will be used when calling the Messages API in several ways:
Default options:
Use the defaultOptions parameter to set the default options. These options will be used unless you override them when generating completions.
final chatModel = ChatAnthropic(
apiKey: anthropicApiKey,
defaultOptions: const ChatAnthropicOptions(
temperature: 0.9,
maxTokens: 100,
),
);
Call options:
You can override the default options when invoking the model:
final res = await chatModel.invoke(
prompt,
options: const ChatAnthropicOptions(temperature: 0.5),
);
Bind:
You can also change the options in a Runnable pipeline using the bind method.
In this example, we use a different model for each question:
final chatModel = ChatAnthropic(apiKey: anthropicApiKey);
const outputParser = StringOutputParser();
final prompt1 = PromptTemplate.fromTemplate('How are you {name}?');
final prompt2 = PromptTemplate.fromTemplate('How old are you {name}?');
final chain = Runnable.fromMap({
'q1': prompt1 | chatModel.bind(const ChatAnthropicOptions(model: 'claude-3-5-sonnet-20241022')) | outputParser,
'q2': prompt2 | chatModel.bind(const ChatAnthropicOptions(model: 'claude-3-sonnet-20240229')) | outputParser,
});
final res = await chain.invoke({'name': 'David'});
Advanced
Custom HTTP client
You can always provide your own implementation of http.Client for further customization:
final client = ChatAnthropic(
apiKey: 'ANTHROPIC_API_KEY',
client: MyHttpClient(),
);
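MyHttpClient is not provided by the library; here is a minimal sketch of what such a client could look like, extending http.BaseClient to intercept every request (the header added below is purely illustrative):
import 'package:http/http.dart' as http;

// A custom client that adds a header to every outgoing request and
// delegates the actual sending to a default client.
class MyHttpClient extends http.BaseClient {
  final http.Client _inner = http.Client();

  @override
  Future<http.StreamedResponse> send(http.BaseRequest request) {
    request.headers['x-my-header'] = 'value'; // illustrative header
    return _inner.send(request);
  }
}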
Using a proxy
HTTP proxy
You can use your own HTTP proxy by overriding the baseUrl and providing your required headers:
final client = ChatAnthropic(
baseUrl: 'https://my-proxy.com',
headers: {'x-my-proxy-header': 'value'},
);
If you need further customization, you can always provide your own http.Client.
SOCKS5 proxy
To use a SOCKS5 proxy, you can use the socks5_proxy package and a custom http.Client.
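A minimal sketch, assuming a SOCKS5 proxy listening on localhost:1080 and the assignToHttpClient helper documented by the socks5_proxy package:
import 'dart:io';

import 'package:http/io_client.dart';
import 'package:socks5_proxy/socks_client.dart';

// Route all requests of a dart:io HttpClient through the SOCKS5 proxy,
// then wrap it in an IOClient so it can be passed to ChatAnthropic.
final httpClient = HttpClient();
SocksTCPClient.assignToHttpClient(httpClient, [
  ProxySettings(InternetAddress.loopbackIPv4, 1080),
]);
final chatModel = ChatAnthropic(
  apiKey: 'ANTHROPIC_API_KEY',
  client: IOClient(httpClient),
);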
Constructors
- ChatAnthropic.new({String? apiKey, String baseUrl = 'https://api.anthropic.com/v1', Map<String, String>? headers, Map<String, dynamic>? queryParams, Client? client, ChatAnthropicOptions defaultOptions = const ChatAnthropicOptions(model: defaultModel, maxTokens: defaultMaxTokens), String encoding = 'cl100k_base'})
  Create a new ChatAnthropic instance.
Properties
- defaultOptions → ChatAnthropicOptions
  The default options to use when invoking the Runnable. (final, inherited)
- encoding ↔ String
  The encoding to use by tiktoken when tokenize is called (see the sketch after this list). (getter/setter pair)
- hashCode → int
  The hash code for this object. (no setter, inherited)
- modelType → String
  Return type of language model. (no setter)
- runtimeType → Type
  A representation of the runtime type of the object. (no setter, inherited)
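A quick sketch of how the encoding property relates to tokenization, assuming chatModel is a ChatAnthropic instance as in the examples above (with the default 'cl100k_base' encoding):
final prompt = PromptValue.string('I love programming.');
final tokens = await chatModel.tokenize(prompt); // token ids produced by tiktoken
final numTokens = await chatModel.countTokens(prompt); // the number of tokens in the prompt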
Methods
- batch(List<PromptValue> inputs, {List<ChatAnthropicOptions>? options}) → Future<List<ChatResult>>
  Batches the invocation of the Runnable on the given inputs. (inherited)
- bind(ChatAnthropicOptions options) → RunnableBinding<PromptValue, ChatAnthropicOptions, ChatResult>
  Binds the Runnable to the given options. (inherited)
- call(List<ChatMessage> messages, {ChatAnthropicOptions? options}) → Future<AIChatMessage>
  Runs the chat model on the given messages and returns a chat message. (inherited)
- close() → void
  Cleans up any resources associated with the Runnable.
- countTokens(PromptValue promptValue, {ChatAnthropicOptions? options}) → Future<int>
  Returns the number of tokens resulting from tokenizing the given prompt. (inherited)
- getCompatibleOptions(RunnableOptions? options) → ChatAnthropicOptions?
  Returns the given options if they are compatible with the Runnable, otherwise returns null. (inherited)
- invoke(PromptValue input, {ChatAnthropicOptions? options}) → Future<ChatResult>
  Invokes the Runnable on the given input.
- noSuchMethod(Invocation invocation) → dynamic
  Invoked when a nonexistent method or property is accessed. (inherited)
- pipe<NewRunOutput extends Object?, NewCallOptions extends RunnableOptions>(Runnable<ChatResult, NewCallOptions, NewRunOutput> next) → RunnableSequence<PromptValue, NewRunOutput>
  Pipes the output of this Runnable into another Runnable using a RunnableSequence. (inherited)
- stream(PromptValue input, {ChatAnthropicOptions? options}) → Stream<ChatResult>
  Streams the output of invoking the Runnable on the given input (see the sketch after this list).
- streamFromInputStream(Stream<PromptValue> inputStream, {ChatAnthropicOptions? options}) → Stream<ChatResult>
  Streams the output of invoking the Runnable on the given inputStream. (inherited)
- tokenize(PromptValue promptValue, {ChatAnthropicOptions? options}) → Future<List<int>>
  Tokenizes the given prompt using tiktoken.
- toString() → String
  A string representation of this object. (inherited)
- withFallbacks(List<Runnable<PromptValue, RunnableOptions, ChatResult>> fallbacks) → RunnableWithFallback<PromptValue, ChatResult>
  Adds fallback runnables to be invoked if the primary runnable fails. (inherited)
- withRetry({int maxRetries = 3, FutureOr<bool> retryIf(Object e)?, List<Duration?>? delayDurations, bool addJitter = false}) → RunnableRetry<PromptValue, ChatResult>
  Adds retry logic to an existing runnable. (inherited)
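For instance, a minimal streaming sketch (assuming each ChatResult chunk exposes the partial message through output.content, as the non-streaming results do):
import 'dart:io';

final stream = chatModel.stream(PromptValue.string('Tell me a joke.'));
await for (final chunk in stream) {
  stdout.write(chunk.output.content); // print each partial response as it arrives
}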
Operators
- operator ==(Object other) → bool
  The equality operator. (inherited)
Constants
- defaultMaxTokens → const int
  The default max tokens to use unless another is specified.
- defaultModel → const String
  The default model to use unless another is specified.