
GnrllyBttrOllamaClient 🦙✨ #

Developed with ❤️ by GnrllyBttr



🌟 Features #

A Dart client for interacting with the Ollama API. This package provides methods to interact with the Ollama API, including chat, model management, embeddings, and more.

GnrllyBttrOllamaClient offers an exceptional developer experience with:

  • Type Safety: Fully type-safe API with comprehensive class definitions and null safety support
  • Immutable Models: Thread-safe, immutable data models using Freezed for reliable state management
  • Request/Response Consistency: Structured request and response objects for all API operations
  • Error Handling: Well-defined exception types and error messages for better debugging
  • Streaming Support: First-class support for streaming responses with strong typing
  • Documentation: Extensive documentation with examples for all features
  • JSON Serialization: Automatic JSON serialization/deserialization for all models
  • Fluent API: Intuitive, chainable methods for building requests
  • Cancellation Support: Built-in request cancellation via cancel tokens
  • Cross-Platform: Works seamlessly across all Dart/Flutter supported platforms
  • Functionalities:
    • Chat: Send chat requests and receive responses.
    • Model Management: Create, copy, delete, and list models.
    • Embeddings: Generate embeddings for given inputs.
    • Text Generation: Generate text using specified models.
    • Streaming: Support for streaming chat and text generation.
    • Versioning: Retrieve the version of the Ollama API.

🚀 Installation #

Add the following to your pubspec.yaml file:

dependencies:
  gnrllybttr_ollama_client: ^1.0.0

Then run:

flutter pub get

For a pure Dart project, run dart pub get instead.

📚 Examples #

Here are some examples of how to use GnrllyBttrOllamaClient:

  • Chat: Send a chat request and receive a response.
  • Model Management: Create, copy, delete, and list models.
  • Embeddings: Generate embeddings for given inputs.
  • Text Generation: Generate text using specified models.
  • Streaming: Stream chat and text generation responses.

🏁 Getting Started #

To get started with GnrllyBttrOllamaClient, follow these steps:

  1. Create a Client: Initialize the client.
  2. Send Requests: Use the client to send chat requests, manage models, generate embeddings, and more.

📖 Usage #

Creating a Client #

import 'package:gnrllybttr_ollama_client/gnrllybttr_ollama_client.dart';

final client = GnrllyBttrOllamaClient();

💬 Chat #

Sending a Chat Request

final cancelToken = HttpCancelToken();

final response = await client.chat(
  request: ChatRequest(
    model: 'llama2',
    messages: <ChatMessage>[
      ChatMessage(
        role: ChatMessageRole.system,
        content: 'You are a knowledgeable AI assistant specializing in Flutter.',
      ),
      ChatMessage(
        role: ChatMessageRole.user,
        content: 'What is the best state management solution for Flutter apps?',
      ),
    ],
  ),
  cancelToken: cancelToken,
);

print(response.message?.content);
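The features list above promises well-defined exception types. A minimal sketch of wrapping a chat call in error handling follows; the concrete exception classes aren't shown in this README, so the sketch catches the base Exception (consult the API reference for the package's specific types, and note that cancelToken is omitted here on the assumption that it is optional):

```dart
import 'package:gnrllybttr_ollama_client/gnrllybttr_ollama_client.dart';

Future<void> main() async {
  final client = GnrllyBttrOllamaClient();
  try {
    final response = await client.chat(
      request: ChatRequest(
        model: 'llama2',
        messages: <ChatMessage>[
          ChatMessage(role: ChatMessageRole.user, content: 'Hello!'),
        ],
      ),
    );
    print(response.message?.content);
  } on Exception catch (e) {
    // Catching the base Exception keeps the sketch package-agnostic;
    // in real code, catch the package's specific exception types.
    print('Chat request failed: $e');
  }
}
```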

Streaming Chat Responses

final stream = client.chatStream(
  request: ChatRequest(
    model: 'codellama',
    messages: <ChatMessage>[
      ChatMessage(
        role: ChatMessageRole.system,
        content: 'You are a Dart and Flutter expert providing concise code reviews.',
      ),
      ChatMessage(
        role: ChatMessageRole.user,
        content: '''
          Can you review this Flutter code for efficiency?
          Widget build(BuildContext context) {
            return ListView.builder(
              itemCount: items.length,
              itemBuilder: (context, index) {
                return ListTile(
                  title: Text(items[index].title),
                );
              },
            );
          }
        ''',
      ),
    ],
  ),
  cancelToken: cancelToken,
);

await for (final response in stream) {
  print(response.message?.content);
}

🛠️ Model Management #

Copying a Model

await client.copyModel(
  request: CopyModelRequest(
    sourceModel: 'llama2:13b',
    destinationModel: 'llama2-custom-fine-tuned',
  ),
  cancelToken: cancelToken,
);

Creating a Model

final stream = client.createModelStream(
  request: CreateModelRequest(
    model: 'custom-mistral-7b',
    from: 'mistral:7b',
    files: <String, String>{
      'training_data.jsonl': 'sha256:8f4e56db7fce1a36f01c3235a82c6e89d52f5c7c8642e8ea4f1f5f0a6864a930',
    },
    adapters: <String, String>{
      'lora-adapter': 'sha256:2d5c7c8642e8ea4f1f5f0a6864a930f4e56db7fce1a36f01c3235a82c6e89d52',
    },
    template: '{{ .System }}\n\nUser: {{ .Prompt }}\nAssistant: ',
    license: 'MIT',
    system: 'You are a helpful AI assistant specialized in Flutter development.',
    parameters: <String, dynamic>{
      'temperature': 0.7,
      'top_p': 0.9,
      'repeat_penalty': 1.1,
    },
    messages: <ChatMessage>[
      ChatMessage(
        role: ChatMessageRole.user,
        content: 'Hello!',
      ),
    ],
    stream: false,
    quantize: 'q4_0',
  ),
  cancelToken: cancelToken,
);

await for (final chunk in stream) {
  print(chunk);
}

Deleting a Model

await client.deleteModel(
  request: DeleteModelRequest(
    model: 'llama2-custom-fine-tuned',
  ),
  cancelToken: cancelToken,
);

Listing Models

final response = await client.listModels(
  cancelToken: cancelToken,
);

print(response);

Listing Running Models

final response = await client.listRunningModels(
  cancelToken: cancelToken,
);

print(response);

Showing Model Information

final modelInfo = await client.showModelInformation(
  modelName: 'llama2',
  cancelToken: cancelToken,
);

print('Model Info: ${modelInfo.modelfile}');

🧠 Embeddings #

final response = await client.embeddings(
  request: EmbedRequest(
    model: 'nomic-embed-text',
    input: 'Flutter is a UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase.',
  ),
  cancelToken: cancelToken,
);

print(response);
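Embedding vectors are typically compared with cosine similarity. The helper below is plain Dart with no package dependencies; how you extract the vectors from the embeddings response depends on the response type (check the API reference for the exact field name):

```dart
import 'dart:math' as math;

/// Cosine similarity between two equal-length embedding vectors.
/// Returns a value in [-1, 1]; closer to 1 means more similar.
double cosineSimilarity(List<double> a, List<double> b) {
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (math.sqrt(normA) * math.sqrt(normB));
}
```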

✍️ Text Generation #

Non-Streaming

final response = await client.generate(
  request: GenerateRequest(
    model: 'llama2',
    prompt: '''
      Write a Flutter widget that displays a customizable loading indicator with the following requirements:
      - Accepts a color parameter
      - Shows a circular progress indicator
      - Has a custom size
      - Displays optional loading text below
    ''',
    temperature: 0.7,
    maxTokens: 500,
  ),
  cancelToken: cancelToken,
);

print(response.response);

Streaming

final stream = client.generateStream(
  request: GenerateRequest(
    model: 'llama2',
    prompt: '''
      Write a Flutter widget that displays a customizable loading indicator with the following requirements:
      - Accepts a color parameter
      - Shows a circular progress indicator
      - Has a custom size
      - Displays optional loading text below
    ''',
    temperature: 0.7,
    maxTokens: 500,
  ),
  cancelToken: cancelToken,
);

await for (final chunk in stream) {
  print(chunk);
}

📦 Blob Operations #

Checking if a Blob Exists

final exists = await client.checkBlobExists(
  digest: 'blobDigest',
  cancelToken: cancelToken,
);

print('Blob exists: $exists');

Pushing a Blob

final blobDigest = 'sha256:3f4e56db7fce1a36f01c3235a82c6e89d52f5c7c8642e8ea4f1f5f0a6864a930';
final blobData = await File('model_weights.bin').readAsBytes();

await client.pushBlob(
  digest: blobDigest,
  data: blobData,
  cancelToken: cancelToken,
);

print('Blob pushed successfully');
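Blob operations identify data by its SHA-256 digest in the sha256:&lt;hex&gt; form shown above. One way to compute that digest is with the widely used package:crypto from pub.dev (an assumption of this sketch, not a dependency of this package; add crypto to your pubspec.yaml first):

```dart
import 'dart:io';

import 'package:crypto/crypto.dart';

/// Computes the 'sha256:<hex>' digest string for a file's contents.
Future<String> digestFor(File file) async {
  final bytes = await file.readAsBytes();
  // sha256.convert returns a Digest whose toString() is the hex form.
  return 'sha256:${sha256.convert(bytes)}';
}
```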

🔄 Model Transfer #

Pulling a Model

final stream = client.pullModelStream(
  request: PullModelRequest(
    model: 'mistral:7b-instruct-v0.2',
    insecure: false,
  ),
  cancelToken: cancelToken,
);

await for (final chunk in stream) {
  print(chunk);
}

Pushing a Model

final stream = client.pushModelStream(
  request: PushModelRequest(
    model: 'mistral-7b-finetuned',
    insecure: false,
    stream: true,
  ),
  cancelToken: cancelToken,
);

await for (final chunk in stream) {
  print('Upload progress: ${chunk.status}');
}

📅 Retrieving API Version #

final response = await client.getVersion(
  cancelToken: cancelToken,
);

print(response);

🛑 Using Cancel Tokens #

Cancel tokens allow you to cancel ongoing HTTP requests. This is particularly useful for long-running requests or when the user navigates away from the current screen, ensuring that resources are not wasted on unnecessary network operations.

import 'package:gnrllybttr_ollama_client/gnrllybttr_ollama_client.dart';

final client = GnrllyBttrOllamaClient();
final cancelToken = HttpCancelToken();

// Start a long-running request
final stream = client.chatStream(
  request: ChatRequest(
    model: 'llama2',
    messages: <ChatMessage>[
      ChatMessage(
        role: ChatMessageRole.system,
        content: 'You are a knowledgeable AI assistant.',
      ),
      ChatMessage(
        role: ChatMessageRole.user,
        content: 'Explain everything there is to know about Flutter.',
      ),
    ],
  ),
  cancelToken: cancelToken,
);

// Cancel the request if needed
cancelToken.cancel();
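A common variation is cancelling automatically after a timeout rather than in response to user action. A sketch using Future.delayed and the same HttpCancelToken API shown above:

```dart
final cancelToken = HttpCancelToken();

// Schedule cancellation 30 seconds from now; any in-flight request
// passed this token will be aborted when cancel() fires.
Future<void>.delayed(const Duration(seconds: 30), () {
  cancelToken.cancel();
});
```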

🤝 Contributing #

We welcome contributions to GnrllyBttrOllamaClient! Please see our contributing guidelines for more information on how to get started.

🆘 Support #

If you need help or have any questions, please feel free to open an issue on GitHub.

📝 Changelog #

See the changelog for a history of changes and updates.

📄 License #

GnrllyBttrOllamaClient is released under the MIT License. See the LICENSE file for details.
