# NocLLM_Dart 🚀
*Also available in Indonesian / Tersedia dalam Bahasa Indonesia: [README_id.md](README_id.md)*
A Dart & Flutter version of the NocLLM library, developed by the Nocturnailed Community. This package brings lightweight, asynchronous Large Language Model (LLM) interaction directly to Dart, supporting both cloud and local APIs with SSE streaming.
## 🌟 Overview
`noc_llm_dart` is a port of our lightweight LLM engine, originally geared toward microcontrollers, translated idiomatically into Dart for pure cross-platform development (Web, iOS, Android, Desktop).
Features:

- **Asynchronous Streaming:** Real-time token-by-token responses via Dart `Stream`.
- **Dynamic Connection:** HTTPS for cloud APIs and plain HTTP for local models (zero SSL overhead).
- **Native Gemini Switcher:** Path-based auto-detection of Google GenAI-specific headers and JSON structure.
- **Multi-Turn Conversations:** Context-aware chat with built-in history management.
- **Minimal Dependencies:** Focused, lightweight implementation built on `package:http` alone.
## 📦 Installation
Add the following to your `pubspec.yaml` file:

```yaml
dependencies:
  noc_llm_dart: ^1.0.0
```
Then run `dart pub get` or `flutter pub get`.
## 🚀 Quick Start
Here are a few examples showcasing `noc_llm_dart`, with a focus on Sumopod Cloud.
### 1. Simple Streaming with Sumopod Cloud
```dart
import 'dart:io';

import 'package:noc_llm_dart/noc_llm_dart.dart';

void main() async {
  // Sumopod Cloud configuration (OpenAI-compatible)
  final ai = NocAI(
    apiKey: 'sk-YOUR_SUMOPOD_API_KEY',
    baseUrl: 'https://ai.sumopod.com/v1',
    model: 'deepseek-v3',
  );

  print('🚀 Asking Sumopod Cloud...');

  // SSE streaming: tokens arrive as they are generated
  await for (final chunk in ai.stream('Explain why the sky is blue.')) {
    stdout.write(chunk);
  }

  ai.dispose();
}
```
### 2. Multi-Turn Chat (Contextual)
```dart
import 'package:noc_llm_dart/noc_llm_dart.dart';

void main() async {
  final ai = NocAI(
    apiKey: 'YOUR_API_KEY',
    baseUrl: 'https://api.openai.com/v1',
    model: 'gpt-3.5-turbo',
  );

  // First question
  await ai.chat('What is the capital of Japan?');

  // Second question (automatically includes the previous history)
  final response = await ai.chat('What is its popular food?');
  print(response); // "Japan's capital is Tokyo... and it is famous for Sushi..."

  ai.dispose();
}
```
### 3. Local LLM (LM Studio / Ollama)
```dart
import 'package:noc_llm_dart/noc_llm_dart.dart';

void main() async {
  // Zero SSL overhead: plain HTTP for local network speed
  final ai = NocAI(
    apiKey: '',
    baseUrl: 'http://localhost:1234/v1',
    model: 'local-model',
  );

  final response = await ai.chat('Hello local AI!');
  print(response);

  ai.dispose();
}
```
## 📚 Supported Providers & Features
### Cloud Providers
- **Sumopod Cloud:** Optimized integration for DeepSeek and GPT models via `ai.sumopod.com`.
- **Gemini Native:** Specialized handling for `generativelanguage.googleapis.com` (detection is automatic).
- **OpenAI / Groq:** Standard `Bearer` token and SSE format support.
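The URL-based auto-detection can be pictured with a small standalone sketch. The host names below are taken from this README, but the function name and the actual detection rules inside `noc_llm_dart` are assumptions, not the package's API:

```dart
/// Illustrative provider detection from a base URL.
/// This mirrors the idea of URL-pattern auto-detection described above;
/// the package's real rules may differ.
String detectProvider(String baseUrl) {
  final host = Uri.parse(baseUrl).host;
  if (host == 'generativelanguage.googleapis.com') return 'gemini';
  if (host == 'ai.sumopod.com') return 'sumopod';
  if (host == 'localhost' || host == '127.0.0.1') return 'local';
  // OpenAI, Groq, and other OpenAI-compatible endpoints
  return 'openai-compatible';
}

void main() {
  print(detectProvider('https://generativelanguage.googleapis.com/v1beta')); // gemini
  print(detectProvider('http://localhost:1234/v1')); // local
}
```

Keying off the host rather than the full path keeps the check robust to version suffixes like `/v1` or `/v1beta`.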
### Local Providers
- **LM Studio:** Seamless connection to `localhost:1234`.
- **Ollama:** Compatible with Ollama's OpenAI-compatible API bridge.
### Features
- **SSE Parser:** Automated extraction of content from `data:` chunks.
- **Config Auto-Detection:** Detects provider capabilities based on URL patterns.
- **Connection Handlers:** Typed exceptions for auth, rate-limit, and connection issues.
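To make the SSE parsing concrete, here is a self-contained sketch of extracting a content delta from one OpenAI-style `data:` line. It is illustrative only; `parseSseLine` is a hypothetical name and not part of the package's API:

```dart
import 'dart:convert';

/// Extracts the content delta from a single OpenAI-style SSE line.
/// Returns null for non-data lines, the terminal `[DONE]` marker,
/// and chunks without a content delta.
String? parseSseLine(String line) {
  if (!line.startsWith('data:')) return null;
  final payload = line.substring('data:'.length).trim();
  if (payload.isEmpty || payload == '[DONE]') return null;
  final json = jsonDecode(payload) as Map<String, dynamic>;
  final choices = json['choices'] as List<dynamic>;
  if (choices.isEmpty) return null;
  final delta =
      (choices.first as Map<String, dynamic>)['delta'] as Map<String, dynamic>?;
  return delta?['content'] as String?;
}

void main() {
  const line = 'data: {"choices":[{"delta":{"content":"Hello"}}]}';
  print(parseSseLine(line)); // Hello
  print(parseSseLine('data: [DONE]')); // null
}
```

Returning `null` for comment lines (`: keep-alive`) and the `[DONE]` sentinel lets a streaming loop simply skip non-content events.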
## 🤝 Contributing
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
## 📝 License
This project is MIT licensed.