ChatGoogleGenerativeAI class
Wrapper around Google AI for Developers API (aka Gemini API).
Example:

```dart
final chatModel = ChatGoogleGenerativeAI(apiKey: '...');
final messages = [
  ChatMessage.humanText('Tell me a joke.'),
];
final prompt = PromptValue.chat(messages);
final res = await chatModel.invoke(prompt);
```
Setup

To use ChatGoogleGenerativeAI you need to have an API key. You can get one here.
Available models

The following models are available:

- `gemini-1.5-flash`:
  - text / image / audio -> text model
  - Max input tokens: 1048576
  - Max output tokens: 8192
- `gemini-1.5-pro`:
  - text / image / audio -> text model
  - Max input tokens: 2097152
  - Max output tokens: 8192
- `gemini-1.0-pro` (or `gemini-pro`):
  - text -> text model
  - Max input tokens: 32760
  - Max output tokens: 8192
- `aqa`:
  - text -> text model
  - Max input tokens: 7168
  - Max output tokens: 1024

Mind that this list may not be up-to-date. Refer to the documentation for the updated list.
Tuned models

You can specify a tuned model by setting the model parameter to `tunedModels/{your-model-name}`. For example:

```dart
final chatModel = ChatGoogleGenerativeAI(
  defaultOptions: ChatGoogleGenerativeAIOptions(
    model: 'tunedModels/my-tuned-model',
  ),
);
```
Call options

You can configure the parameters that will be used when calling the chat completions API in several ways:

Default options:

Use the defaultOptions parameter to set the default options. These options will be used unless you override them when generating completions.

```dart
final chatModel = ChatGoogleGenerativeAI(
  defaultOptions: ChatGoogleGenerativeAIOptions(
    model: 'gemini-pro-vision',
    temperature: 0,
  ),
);
```
Call options:

You can override the default options when invoking the model:

```dart
final res = await chatModel.invoke(
  prompt,
  options: const ChatGoogleGenerativeAIOptions(temperature: 1),
);
```
Bind:

You can also change the options in a Runnable pipeline using the bind method. In this example, we are using a different model for each question:

```dart
final chatModel = ChatGoogleGenerativeAI(apiKey: '...');
const outputParser = StringOutputParser();
final prompt1 = PromptTemplate.fromTemplate('How are you {name}?');
final prompt2 = PromptTemplate.fromTemplate('How old are you {name}?');
final chain = Runnable.fromMap({
  'q1': prompt1 |
      chatModel.bind(const ChatGoogleGenerativeAIOptions(model: 'gemini-pro')) |
      outputParser,
  'q2': prompt2 |
      chatModel.bind(const ChatGoogleGenerativeAIOptions(model: 'gemini-pro-vision')) |
      outputParser,
});
final res = await chain.invoke({'name': 'David'});
```
Tool calling

ChatGoogleGenerativeAI supports tool calling. Check the docs for more information on how to use tools.

Example:

```dart
const tool = ToolSpec(
  name: 'get_current_weather',
  description: 'Get the current weather in a given location',
  inputJsonSchema: {
    'type': 'object',
    'properties': {
      'location': {
        'type': 'string',
        'description': 'The city and state, e.g. San Francisco, CA',
      },
    },
    'required': ['location'],
  },
);
final chatModel = ChatGoogleGenerativeAI(
  defaultOptions: ChatGoogleGenerativeAIOptions(
    model: 'gemini-1.5-pro-latest',
    temperature: 0,
    tools: [tool],
  ),
);
final res = await chatModel.invoke(
  PromptValue.string('What’s the weather like in Boston and Madrid right now in celsius?'),
);
```
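Continuing the example above, the model's answer to a tool-enabled prompt typically contains tool calls rather than text. The following is a minimal sketch of inspecting them, assuming the result's output message exposes a `toolCalls` list (as AIChatMessage does in LangChain.dart); verify the exact field names against your package version:

```dart
// Continuation of the example above: `res` is the ChatResult returned
// by chatModel.invoke(...).
final aiMessage = res.output;
for (final toolCall in aiMessage.toolCalls) {
  // `arguments` holds the parsed JSON matching the tool's inputJsonSchema,
  // e.g. {'location': 'Boston, MA'}.
  print('Tool: ${toolCall.name}, args: ${toolCall.arguments}');
  // From here you would call your own weather API with those arguments and
  // send the result back to the model in a tool message to get a final answer.
}
```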
Advanced

Custom HTTP client

You can always provide your own implementation of http.Client for further customization:

```dart
final client = ChatGoogleGenerativeAI(
  apiKey: 'GOOGLE_AI_API_KEY',
  client: MyHttpClient(),
);
```
Using a proxy

HTTP proxy

You can use your own HTTP proxy by overriding the baseUrl and providing your required headers:

```dart
final client = ChatGoogleGenerativeAI(
  baseUrl: 'https://my-proxy.com',
  headers: {'x-my-proxy-header': 'value'},
  queryParams: {'x-my-proxy-query-param': 'value'},
);
```

If you need further customization, you can always provide your own http.Client.
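The `MyHttpClient` used earlier is not defined on this page. A custom client might look like the following sketch: it extends `http.BaseClient` and overrides `send`, which is the standard extension point of the http package (the header name is just an illustrative assumption):

```dart
import 'package:http/http.dart' as http;

// A hypothetical custom client. Wrapping an inner http.Client and
// overriding `send` lets you inject headers, logging, retries, etc.
// into every outgoing request.
class MyHttpClient extends http.BaseClient {
  MyHttpClient([http.Client? inner]) : _inner = inner ?? http.Client();

  final http.Client _inner;

  @override
  Future<http.StreamedResponse> send(http.BaseRequest request) {
    // Example only: add whatever customization you need here.
    request.headers['x-custom-header'] = 'value';
    return _inner.send(request);
  }
}
```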
SOCKS5 proxy

To use a SOCKS5 proxy, you can use the socks5_proxy package and a custom http.Client.
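A hedged sketch of wiring this together: the proxy host/port and the socks5_proxy calls below are assumptions based on that package's README (and `IOClient` comes from the http package), so check the package docs for the current API before relying on this:

```dart
import 'dart:io';

import 'package:http/io_client.dart';
import 'package:socks5_proxy/socks_client.dart';

void main() {
  // Assumption: a SOCKS5 proxy is listening on localhost:1080.
  final httpClient = HttpClient();
  SocksTCPClient.assignToHttpClient(httpClient, [
    ProxySettings(InternetAddress.loopbackIPv4, 1080),
  ]);

  // IOClient adapts dart:io's HttpClient to the http.Client interface
  // that ChatGoogleGenerativeAI expects.
  final chatModel = ChatGoogleGenerativeAI(
    apiKey: 'GOOGLE_AI_API_KEY',
    client: IOClient(httpClient),
  );
}
```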
Constructors

- ChatGoogleGenerativeAI.new({String? apiKey, String? baseUrl, Map<String, String>? headers, Map<String, dynamic>? queryParams, int retries = 3, Client? client, ChatGoogleGenerativeAIOptions defaultOptions = const ChatGoogleGenerativeAIOptions(model: defaultModel)})
  Create a new ChatGoogleGenerativeAI instance.
Properties

- apiKey ↔ String
  Get the API key.
  getter/setter pair
- defaultOptions → ChatGoogleGenerativeAIOptions
  The default options to use when invoking the Runnable.
  final, inherited
- hashCode → int
  The hash code for this object.
  no setter, inherited
- modelType → String
  Return type of language model.
  no setter
- runtimeType → Type
  A representation of the runtime type of the object.
  no setter, inherited
Methods

- batch(List<PromptValue> inputs, {List<ChatGoogleGenerativeAIOptions>? options}) → Future<List<ChatResult>>
  Batches the invocation of the Runnable on the given inputs.
  inherited
- bind(ChatGoogleGenerativeAIOptions options) → RunnableBinding<PromptValue, ChatGoogleGenerativeAIOptions, ChatResult>
  Binds the Runnable to the given options.
  inherited
- call(List<ChatMessage> messages, {ChatGoogleGenerativeAIOptions? options}) → Future<AIChatMessage>
  Runs the chat model on the given messages and returns a chat message.
  inherited
- close() → void
  Cleans up any resources associated with the Runnable.
- countTokens(PromptValue promptValue, {ChatGoogleGenerativeAIOptions? options}) → Future<int>
  Returns the number of tokens resulting from tokenizing the given prompt.
- getCompatibleOptions(RunnableOptions? options) → ChatGoogleGenerativeAIOptions?
  Returns the given options if they are compatible with the Runnable, otherwise returns null.
  inherited
- invoke(PromptValue input, {ChatGoogleGenerativeAIOptions? options}) → Future<ChatResult>
  Invokes the Runnable on the given input.
- noSuchMethod(Invocation invocation) → dynamic
  Invoked when a nonexistent method or property is accessed.
  inherited
- pipe<NewRunOutput extends Object?, NewCallOptions extends RunnableOptions>(Runnable<ChatResult, NewCallOptions, NewRunOutput> next) → RunnableSequence<PromptValue, NewRunOutput>
  Pipes the output of this Runnable into another Runnable using a RunnableSequence.
  inherited
- stream(PromptValue input, {ChatGoogleGenerativeAIOptions? options}) → Stream<ChatResult>
  Streams the output of invoking the Runnable on the given input.
- streamFromInputStream(Stream<PromptValue> inputStream, {ChatGoogleGenerativeAIOptions? options}) → Stream<ChatResult>
  Streams the output of invoking the Runnable on the given inputStream.
  inherited
- tokenize(PromptValue promptValue, {ChatGoogleGenerativeAIOptions? options}) → Future<List<int>>
  Tokenizes the given prompt using the encoding used by the language model.
- toString() → String
  A string representation of this object.
  inherited
- withFallbacks(List<Runnable<PromptValue, RunnableOptions, ChatResult>> fallbacks) → RunnableWithFallbacks<PromptValue, ChatResult>
  Adds fallback runnables to be invoked if the primary runnable fails.
  inherited
- withRetry({int maxRetries = 3, FutureOr<bool> Function(Object e)? retryIf, List<Duration?>? delayDurations, bool addJitter = false}) → RunnableRetry<PromptValue, ChatResult>
  Adds retry logic to an existing runnable.
  inherited
Operators

- operator ==(Object other) → bool
  The equality operator.
  inherited

Constants

- defaultModel → const String
  The default model to use unless another is specified.