perplexity_dart #
Perplexity Dart SDK is a lightweight and type-safe Dart client for interacting with Perplexity.ai's chat/completions API. It supports both streaming and non-streaming responses and flexible model switching (e.g., sonar, sonar-pro, etc.), and it is designed to work with Flutter apps. This package also supports perplexity_flutter.
Features #
- Streamed and full chat completion support
- Switch between models with known context lengths
- Chat roles: system, user, assistant, tool (see the multi-turn sketch after this list)
- Image input: send images as base64 or URL directly alongside text
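The roles map directly onto a conversation history. Below is a minimal multi-turn sketch; it reuses StandardMessageModel and MessageRole from the examples later in this README, and it assumes the assistant role is exposed as MessageRole.assistant (the enum member names beyond system and user are not shown elsewhere in this document):

final conversation = [
  StandardMessageModel(
    role: MessageRole.system,
    content: 'Be precise and concise.',
  ),
  StandardMessageModel(
    role: MessageRole.user,
    content: 'What is the capital of France?',
  ),
  // Assumed member name for the assistant role.
  StandardMessageModel(
    role: MessageRole.assistant,
    content: 'Paris.',
  ),
  StandardMessageModel(
    role: MessageRole.user,
    content: 'And roughly how many people live there?',
  ),
];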
Getting Started #
Add the SDK to your project:
dependencies:
  perplexity_dart: ^1.0.5
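Then fetch packages with dart pub get (or flutter pub get in a Flutter project).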
Direct API Usage #
For more control, you can use the PerplexityClient directly:
import 'dart:convert';
import 'dart:io';

import 'package:perplexity_dart/perplexity_dart.dart';
void main() async {
final client = PerplexityClient(
apiKey: 'your-api-key',
);
// Create messages
final messages = [
StandardMessageModel(
role: MessageRole.system,
content: 'Be precise and concise.',
),
StandardMessageModel(
role: MessageRole.user,
content: 'Hello, how are you?',
),
];
// Non-streaming response
final requestModel = ChatRequestModel(
model: PerplexityModel.sonarPro,
messages: messages,
stream: false,
);
final response = await client.sendMessage(requestModel: requestModel);
print(response.content);
// Streaming response
final streamRequestModel = ChatRequestModel(
model: PerplexityModel.sonar,
messages: messages,
stream: true,
);
final stream = client.streamChat(requestModel: streamRequestModel);
await for (final chunk in stream) {
print(chunk);
}
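// Illustrative addition (not part of the original example): chunks can also be
// accumulated into a single string. A fresh streaming request is issued here,
// since the stream above has already been consumed.
final buffer = StringBuffer();
await for (final chunk in client.streamChat(requestModel: streamRequestModel)) {
buffer.write(chunk);
}
print(buffer.toString());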
// Send a local image as a base64 data URI.
final bytes = File('/path/to/photo.png').readAsBytesSync();
final base64Image = base64Encode(bytes);
final dataUri = 'data:image/png;base64,$base64Image';
final req = ChatRequestModel.defaultImageRequest(
url: dataUri,
systemPrompt: 'Describe this image.',
imagePrompt: 'What’s happening here?',
);
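// Illustrative addition: the base64 request can be sent with the same
// sendMessage call used earlier in this example.
final localImageResponse = await client.sendMessage(requestModel: req);
print(localImageResponse.content);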
// Send an image by URL (a base64 data URI also works, as shown above).
final request = ChatRequestModel.defaultImageRequest(
url: 'https://example.com/photo.png',
systemPrompt: 'You are an expert image analyst.',
imagePrompt: 'Describe what you see here.',
stream: false,
model: PerplexityModel.sonarPro,
);
final response = await client.sendMessage(requestModel: request);
print(response.content);
}
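Error Handling #
Requests can fail for ordinary reasons (an invalid API key, connectivity problems, rate limits), so it is worth wrapping calls defensively. The helper below is an illustrative sketch: askSafely is a hypothetical name, and no SDK-specific exception type is assumed; it only reuses the calls shown above.

import 'package:perplexity_dart/perplexity_dart.dart';

// Hypothetical convenience wrapper around the calls shown above.
Future<void> askSafely(
  PerplexityClient client,
  ChatRequestModel request,
) async {
  try {
    final response = await client.sendMessage(requestModel: request);
    print(response.content);
  } catch (e) {
    // The concrete exception type depends on the SDK and its HTTP layer.
    print('Perplexity request failed: $e');
  }
}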
Available Models #
The SDK supports all current Perplexity models with their context lengths:
- PerplexityModel.sonar: 128K tokens
- PerplexityModel.sonarPro: 200K tokens
- PerplexityModel.sonarDeepResearch: 128K tokens
- PerplexityModel.sonarReasoning: 128K tokens
- PerplexityModel.sonarReasoningPro: 128K tokens
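If you switch models at runtime, it can help to keep the choice in one place. The helper below is purely illustrative: chooseModel and its flags are hypothetical, while the enum values are the ones listed above.

import 'package:perplexity_dart/perplexity_dart.dart';

// Hypothetical helper that maps a coarse task description onto one of the
// models listed above; adjust the mapping to your own needs.
PerplexityModel chooseModel({
  bool deepResearch = false,
  bool reasoning = false,
}) {
  if (deepResearch) return PerplexityModel.sonarDeepResearch;
  if (reasoning) return PerplexityModel.sonarReasoningPro;
  return PerplexityModel.sonar;
}

The returned value can be passed straight to the model parameter of ChatRequestModel.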
🔧 Advanced Configuration #
The ChatRequestModel supports various parameters for fine-tuning your requests:
final requestModel = ChatRequestModel(
model: PerplexityModel.sonar,
messages: messages,
stream: true,
maxTokens: 1000,
temperature: 0.7,
topP: 0.9,
searchDomainFilter: ['example.com'],
returnImages: false,
returnRelatedQuestions: true,
searchRecencyFilter: 'day',
topK: 3,
presencePenalty: 0.0,
frequencyPenalty: 0.0,
);
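As in the earlier examples, only model, messages, and stream need to be passed when the defaults are acceptable; a configured request is then sent with client.streamChat (when stream is true) or client.sendMessage (when it is false).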
License #
This project is licensed under the MIT License - see the LICENSE file for details.