# llm_ollama 0.1.7
Ollama backend implementation for LLM interactions. Provides streaming chat, embeddings, tool calling, vision support, and model management.
# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased]
## 0.1.7 - 2026-02-10

### Added
- `batchEmbed()` implementation: delegates to existing batch-capable `embed()` (Ollama `/api/embed` accepts an array of inputs).
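A minimal sketch of that delegation, assuming the `http` package and a hypothetical `OllamaEmbeddingClient`; only the `/api/embed` request and response shape comes from Ollama itself, and the package's real class and method signatures may differ:

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

// Hypothetical client for illustration; not the package's actual class.
class OllamaEmbeddingClient {
  OllamaEmbeddingClient({this.baseUrl = 'http://localhost:11434'});

  final String baseUrl;

  /// Ollama's /api/embed endpoint accepts a list of inputs directly,
  /// so a single call already covers the batch case.
  Future<List<List<double>>> embed(String model, List<String> inputs) async {
    final response = await http.post(
      Uri.parse('$baseUrl/api/embed'),
      headers: {'Content-Type': 'application/json'},
      body: jsonEncode({'model': model, 'input': inputs}),
    );
    final body = jsonDecode(response.body) as Map<String, dynamic>;
    return [
      for (final row in body['embeddings'] as List)
        [for (final v in row as List) (v as num).toDouble()],
    ];
  }

  /// batchEmbed() can simply delegate, because embed() is batch-capable.
  Future<List<List<double>>> batchEmbed(String model, List<String> inputs) =>
      embed(model, inputs);
}
```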
## 0.1.6 - 2026-02-10

### Fixed
- Ensured Ollama tool calls always produce `LLMToolCall` instances with non-null, non-empty `id` values, synthesizing IDs when Ollama does not provide them.
- Aligned tool-calling behavior with `llm_core`'s `toolCallId` validation so that tool execution no longer fails with `Tool message must have toolCallId` when used together with `llm_core` (see the sketch below).
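A sketch of how that ID synthesis can work. The `LLMToolCall` shape shown here and the raw response layout are assumptions for illustration, not the package's actual definitions:

```dart
// Assumed shape of LLMToolCall for this sketch only.
class LLMToolCall {
  LLMToolCall({required this.id, required this.name, required this.arguments});

  final String id;
  final String name;
  final Map<String, dynamic> arguments;
}

int _syntheticId = 0;

/// Ollama may return tool calls without an id; llm_core rejects tool
/// messages whose toolCallId is null or empty, so synthesize a
/// non-empty id whenever one is missing.
LLMToolCall toolCallFromOllama(Map<String, dynamic> raw) {
  final function = raw['function'] as Map<String, dynamic>;
  final id = raw['id'] as String?;
  return LLMToolCall(
    id: (id == null || id.isEmpty) ? 'call_${_syntheticId++}' : id,
    name: function['name'] as String,
    arguments: (function['arguments'] as Map).cast<String, dynamic>(),
  );
}
```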
## 0.1.5 - 2026-01-26

### Added
- Builder pattern for `OllamaChatRepository` via `OllamaChatRepositoryBuilder` for complex configurations (see the sketch after this list)
- Support for `StreamChatOptions` in the `streamChat()` method
- Support for the `chatResponse()` method for non-streaming complete responses
- Support for `RetryConfig` and `TimeoutConfig` for advanced request configuration
- Input validation for model names and messages
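A sketch of what that builder-based configuration can look like. Every method name, field, and default below is an assumption drawn from the names in this list, not the package's documented API:

```dart
// Assumed config shapes for this sketch only.
class RetryConfig {
  const RetryConfig({this.maxAttempts = 3, this.delay = const Duration(seconds: 1)});

  final int maxAttempts;
  final Duration delay;
}

class TimeoutConfig {
  const TimeoutConfig({this.requestTimeout = const Duration(seconds: 60)});

  final Duration requestTimeout;
}

class OllamaChatRepository {
  OllamaChatRepository({
    required this.baseUrl,
    required this.model,
    this.retry,
    this.timeout,
  });

  final String baseUrl;
  final String model;
  final RetryConfig? retry;
  final TimeoutConfig? timeout;
}

class OllamaChatRepositoryBuilder {
  String _baseUrl = 'http://localhost:11434';
  String? _model;
  RetryConfig? _retry;
  TimeoutConfig? _timeout;

  OllamaChatRepositoryBuilder baseUrl(String url) {
    _baseUrl = url;
    return this;
  }

  OllamaChatRepositoryBuilder model(String name) {
    _model = name;
    return this;
  }

  OllamaChatRepositoryBuilder retry(RetryConfig config) {
    _retry = config;
    return this;
  }

  OllamaChatRepositoryBuilder timeout(TimeoutConfig config) {
    _timeout = config;
    return this;
  }

  /// Input validation as described above: fail fast on a missing or
  /// blank model name instead of failing later at request time.
  OllamaChatRepository build() {
    final model = _model;
    if (model == null || model.trim().isEmpty) {
      throw ArgumentError('A non-empty model name is required.');
    }
    return OllamaChatRepository(
      baseUrl: _baseUrl,
      model: model,
      retry: _retry,
      timeout: _timeout,
    );
  }
}

void main() {
  final repo = OllamaChatRepositoryBuilder()
      .baseUrl('http://localhost:11434')
      .model('llama3.2')
      .retry(const RetryConfig(maxAttempts: 5))
      .timeout(const TimeoutConfig(requestTimeout: Duration(seconds: 120)))
      .build();
  print('Configured repository for model ${repo.model}');
}
```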
### Changed
- `streamChat()` now accepts an optional `StreamChatOptions` parameter (usage sketched below)
- Improved error handling and retry logic
- Enhanced documentation
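A usage sketch contrasting the streaming and non-streaming paths. The `StreamChatOptions` fields, message type, and both method signatures are assumptions based on this changelog, with placeholder chunks standing in for real Ollama responses:

```dart
import 'dart:async';

// Assumed options shape for this sketch only.
class StreamChatOptions {
  const StreamChatOptions({this.temperature});

  final double? temperature;
}

class OllamaChatRepository {
  /// Streams the reply chunk by chunk; `options` is the optional
  /// parameter this release added.
  Stream<String> streamChat(List<String> messages,
      {StreamChatOptions? options}) async* {
    // Placeholder chunks standing in for the real HTTP streaming call.
    yield 'Hello';
    yield ', world!';
  }

  /// Non-streaming variant: collect the whole stream into one
  /// complete response.
  Future<String> chatResponse(List<String> messages,
          {StreamChatOptions? options}) =>
      streamChat(messages, options: options).join();
}

Future<void> main() async {
  final repo = OllamaChatRepository();

  await for (final chunk in repo.streamChat(
    ['Hi there'],
    options: const StreamChatOptions(temperature: 0.2),
  )) {
    print(chunk);
  }

  print(await repo.chatResponse(['Hi there']));
}
```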