# yls_agi_sdk_dart
Flutter/Dart SDK for the YLS AGI gateway, bridged from `yls-agi-rust-sdk` with `flutter_rust_bridge`.
It provides:
- Unified OpenAI, Gemini, and Claude chat APIs
- Streaming chat responses
- Multimodal inputs with text, image URL, and base64 image parts
- OpenAI gpt-image-2 image generation and reference-image editing
- Gemini image generation and reference-image editing
- High-level provider facades and official-SDK-style namespaces
- Access to the generated low-level FRB bindings when needed
## Install
```yaml
dependencies:
  yls_agi_sdk_dart: ^0.1.0
```
## Initialize
```dart
import 'package:flutter/widgets.dart';
import 'package:yls_agi_sdk_dart/yls_agi_sdk_dart.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await YlsAgi.init();
}
```
## Create Config
```dart
final config = YlsConfigFactory.ylsAgi(
  apiKey: 'your-api-key',
  chatgptImageApiKey: 'your-codex-key',
  proxy: YlsConfigFactory.noProxy(),
);
```
You can also use `YlsConfigFactory.gateway(...)` if you need custom base URLs or auth modes.
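The exact parameters of `YlsConfigFactory.gateway(...)` are not documented in this README, so the following is only an illustrative sketch: `baseUrl` and `authMode` are hypothetical names standing in for whatever the factory actually accepts.

```dart
// Hypothetical sketch only: `baseUrl` and `authMode` are placeholder
// parameter names, not confirmed parts of the gateway factory API.
final gatewayConfig = YlsConfigFactory.gateway(
  apiKey: 'your-api-key',
  baseUrl: 'https://your-gateway.example.com/v1',
  authMode: 'bearer',
);
```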
## Quick Start
```dart
final openai = YlsAgi.openai(config);

final response = await openai
    .request(YlsModels.openai.gpt41)
    .system('You are a concise assistant.')
    .user('Introduce Rust in one sentence.')
    .generationOptions(
      temperature: 0.2,
      maxTokens: 256,
    )
    .chat();

print(response.contentText);
```
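The Quick Start awaits `chat()` directly. The SDK's exception types are not documented in this README, so a defensive sketch can only catch a generic `Exception`:

```dart
try {
  final response = await openai
      .request(YlsModels.openai.gpt41)
      .user('Introduce Rust in one sentence.')
      .chat();
  print(response.contentText);
} on Exception catch (e) {
  // Specific SDK exception types are not documented here; a generic
  // catch covers network and provider failures alike.
  print('Chat failed: $e');
}
```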
## Official-SDK-Style APIs
### `responses.create(...)`
```dart
final openai = YlsAgi.openai(config);

final response = await openai.responses.create(
  input: 'Explain Rust ownership in one sentence.',
  system: 'You are a concise assistant.',
  options: YlsOptionPresets.precise(maxTokens: 128),
);

print(response.contentText);
```
### `responses.stream(...)`
```dart
final gemini = YlsAgi.gemini(config);

await for (final chunk in gemini.responses.stream(
  input: 'Stream a short introduction.',
  options: YlsOptionPresets.balanced(),
)) {
  print(chunk.textDelta);
  if (chunk.isDone) break;
}
```
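When the deltas should be shown as one growing string (for example, bound to a UI label), they can be accumulated in a `StringBuffer`. This sketch combines only the `textDelta` and `isDone` fields shown above:

```dart
final buffer = StringBuffer();

await for (final chunk in gemini.responses.stream(
  input: 'Stream a short introduction.',
  options: YlsOptionPresets.balanced(),
)) {
  // Append each delta as it arrives; re-render the UI from `buffer`.
  buffer.write(chunk.textDelta);
  if (chunk.isDone) break;
}

// The full response text, assembled from the streamed deltas.
print(buffer.toString());
```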
### `chat.completions.create(...)`
```dart
final claude = YlsAgi.claude(config);

final response = await claude.chat.completions.create(
  messages: [
    YlsMessageBuilder.systemText('You are a concise assistant.'),
    YlsMessageBuilder.userText('Summarize Dart isolates in one sentence.'),
  ],
  options: YlsOptionPresets.precise(),
);
```
### `chat.completions.stream(...)`
```dart
await for (final chunk in openai.chat.completions.stream(
  messages: [
    YlsMessageBuilder.userText('Stream a short product description.'),
  ],
)) {
  print(chunk.textDelta);
}
```
## Provider Facades
```dart
final openai = YlsAgi.openai(config);
final gemini = YlsAgi.gemini(config);
final claude = YlsAgi.claude(config);
```
These facades bind the provider up front, so you do not need to pass `Provider.openAi`, `Provider.gemini`, or `Provider.claude` on every call.
## Chain-Style Builders
```dart
final request = openai
    .request()
    .system('You are a concise assistant.')
    .user('Say hello in Chinese.')
    .generationOptions(
      temperature: 0.2,
      maxTokens: 64,
    )
    .build();

final response = await openai.chat(request);
```
You can also send directly from the builder:
```dart
final response = await openai
    .request()
    .system('You are a concise assistant.')
    .user('Say hello in Chinese.')
    .chat();
```
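Because the builder chain is identical for every call except the prompt, it can be wrapped in a small helper. The facade classes are not named in this README, so this sketch takes the facade as `dynamic`:

```dart
// Wraps the chain-style builder shown above. `facade` is typed
// `dynamic` because the README does not name the facade classes.
Future<String?> askConcise(dynamic facade, String prompt) async {
  final response = await facade
      .request()
      .system('You are a concise assistant.')
      .user(prompt)
      .generationOptions(temperature: 0.2, maxTokens: 64)
      .chat();
  return response.contentText;
}

// Usage with any of the provider facades:
// final text = await askConcise(openai, 'Say hello in Chinese.');
```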
## Multimodal Input
```dart
final imageMessage = YlsMessageBuilder
    .userText('Describe this image.')
    .addImageBase64(
      mimeType: YlsMimeTypes.png,
      dataBase64: imageBase64,
    );

final response = await YlsAgi.gemini(config).chat.completions.create(
  model: YlsModels.gemini.gemini3ProPreview,
  messages: [imageMessage],
);
```
You can also use `responses.create(inputParts: [...])`:
```dart
final response = await YlsAgi.gemini(config).responses.create(
  inputParts: [
    YlsMessageBuilder.text('Describe this image.'),
    YlsMessageBuilder.imageBase64(
      mimeType: YlsMimeTypes.png,
      dataBase64: imageBase64,
    ),
  ],
);
```
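The `imageBase64` variable in these examples is assumed to already hold encoded data. Using only the Dart standard library, it can be produced from a local file like this:

```dart
import 'dart:convert';
import 'dart:io';

// Read an image file and base64-encode it for use as `dataBase64`.
Future<String> loadImageBase64(String path) async {
  final bytes = await File(path).readAsBytes();
  return base64Encode(bytes);
}
```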
## Gemini Image Generation
```dart
final gemini = YlsAgi.gemini(config);

final imageResponse = await gemini.images.generate(
  prompt: 'Create a photorealistic orange cat wearing sunglasses.',
  options: YlsOptionPresets.image(),
);

final firstImage = imageResponse.images.first;
print(firstImage.mimeType);
await firstImage.saveToFile('/tmp/cat.png');
```
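The example above saves only the first image. If a response carries several, the `images` list shown above can be iterated; the fixed `.png` extension here is a simplification, since `mimeType` may vary:

```dart
for (var i = 0; i < imageResponse.images.length; i++) {
  // Simplification: assumes every image is a PNG; check `mimeType`
  // if the gateway can return other formats.
  await imageResponse.images[i].saveToFile('/tmp/cat_$i.png');
}
```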
## OpenAI gpt-image-2 Image Generation
```dart
final openai = YlsAgi.openai(config);

final imageResponse = await openai.images.generate(
  prompt: 'A polished 2D tower-defense turret, transparent background.',
);

await imageResponse.image.saveToFile('/tmp/turret.png');
```
By default, the high-level layer uses:
- outer model: `YlsDefaultModels.openaiImageOuter`
- image model: `YlsDefaultModels.openaiImageModel`
You can override either of them:
```dart
final imageResponse = await openai.images.generate(
  model: YlsModels.openai.gpt54,
  imageModel: YlsModels.openai.gptImage2,
  prompt: 'A polished fantasy potion icon, transparent background.',
);
```
## OpenAI gpt-image-2 Reference Images
```dart
final edited = await openai.imageRequest()
    .prompt('Turn this sketch into a polished game icon.')
    .referenceUrl('https://example.com/sketch.png')
    .referenceFileId('file-123')
    .generate();
```
You can also attach base64 reference images:
```dart
final edited = await openai.imageRequest()
    .prompt('Turn this sketch into a polished game icon.')
    .referenceBase64(
      mimeType: YlsMimeTypes.png,
      dataBase64: imageBase64,
    )
    .generate();
```
## Gemini Image Editing
```dart
final edited = await gemini.imageRequest()
    .prompt('Restyle it in a cyberpunk style.')
    .referenceImage(
      mimeType: YlsMimeTypes.png,
      dataBase64: imageBase64,
    )
    .generate();
```
## Defaults and Presets
The high-level layer includes:
- `YlsDefaultModels.openaiChat`
- `YlsDefaultModels.openaiImageOuter`
- `YlsDefaultModels.openaiImageModel`
- `YlsDefaultModels.geminiChat`
- `YlsDefaultModels.claudeChat`
- `YlsDefaultModels.geminiImage`
- `YlsOptionPresets.precise()`
- `YlsOptionPresets.balanced()`
- `YlsOptionPresets.creative()`
- `YlsOptionPresets.image()`
## Low-Level API
If you need full control, the generated FRB bindings are also exported, including:
- `ClientConfig`
- `ChatRequest`
- `ChatMessage`
- `MessagePart`
- `YlsAgiClient`
- `chat(...)`
- `chatStream(...)`
- `generateGeminiImage(...)`