llm_core 0.1.7
`llm_core: ^0.1.7`
Core abstractions for LLM (Large Language Model) interactions. Provides common interfaces, models, and utilities used by LLM backend implementations.
# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased]
## 0.1.7 - 2026-02-10
### Added
- `batchEmbed()` on `LLMChatRepository`: explicit API for embedding multiple texts in one call. Same signature as `embed()`; the default implementation delegates to `embed()`. Documented for the Ollama, OpenAI, and llama.cpp backends.
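A minimal sketch of the delegation idea, assuming `embed()` already accepts a list of texts (one reading of the "same signature" note above); the stub types and return shape are illustrative assumptions, not the package's actual declarations.

```dart
/// Illustrative stub; the real LLMEmbedding in llm_core may carry more.
class LLMEmbedding {
  final List<double> vector;
  const LLMEmbedding(this.vector);
}

abstract class LLMChatRepository {
  /// Concrete backends (Ollama, OpenAI, llama.cpp) implement this.
  Future<List<LLMEmbedding>> embed(List<String> texts);

  /// Explicit batch API; the default simply forwards to embed().
  Future<List<LLMEmbedding>> batchEmbed(List<String> texts) => embed(texts);
}
```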
## 0.1.6 - 2026-02-10
### Fixed
- Hardened `StreamToolExecutor` to always synthesize a non-empty `toolCallId` for `LLMRole.tool` messages when a backend-provided `LLMToolCall.id` is missing or empty, preventing `Tool message must have toolCallId` validation errors.
- Improved tool execution error handling so that thrown tool exceptions are surfaced as tool messages rather than crashing the stream.
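A hedged sketch of the id-hardening idea; the `LLMToolCall` stub and the helper name below are assumptions for illustration, not `StreamToolExecutor`'s internals.

```dart
/// Illustrative stub; llm_core's LLMToolCall has more fields.
class LLMToolCall {
  final String? id;
  final String name;
  const LLMToolCall({this.id, required this.name});
}

/// Returns the backend-provided id when present, otherwise synthesizes a
/// deterministic non-empty fallback so LLMRole.tool messages always pass
/// the "Tool message must have toolCallId" validation.
String resolveToolCallId(LLMToolCall call, int index) {
  final id = call.id;
  if (id != null && id.isNotEmpty) return id;
  return 'toolcall-$index-${call.name}';
}
```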
## 0.1.5 - 2026-01-26
### Added
- `StreamChatOptions` class to encapsulate all streaming chat options and reduce parameter proliferation
- `RetryConfig` and `RetryUtil` for configurable retry logic with exponential backoff (a retry sketch follows this list)
- `TimeoutConfig` for flexible timeout configuration (connection, read, total, large payloads)
- `LLMMetrics` interface and `DefaultLLMMetrics` implementation for optional metrics collection
- `chatResponse()` method on `LLMChatRepository` for non-streaming complete responses
- Input validation utilities in the `Validation` class
- `ChatRepositoryBuilderBase` for implementing builder patterns in repository implementations
- `StreamChatOptionsMerger` for merging options from multiple sources
- HTTP client utilities (`HttpClientHelper`) for consistent request handling
- Error handling utilities (`ErrorHandlers`, `BackendErrorHandler`) for standardized error processing
- Tool execution utilities (`ToolExecutor`) for managing tool calling workflows
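A minimal sketch of configurable exponential backoff in the spirit of `RetryConfig` and `RetryUtil`; the field names, defaults, and function shape here are assumptions, not the actual API.

```dart
import 'dart:math';

/// Illustrative configuration; the real RetryConfig may differ.
class RetryConfig {
  final int maxAttempts;
  final Duration baseDelay;
  const RetryConfig({
    this.maxAttempts = 3,
    this.baseDelay = const Duration(milliseconds: 500),
  });
}

/// Retries [action], doubling the delay after each failed attempt:
/// baseDelay, 2x, 4x, ... until maxAttempts is exhausted.
Future<T> retryWithBackoff<T>(
  Future<T> Function() action,
  RetryConfig config,
) async {
  for (var attempt = 0;; attempt++) {
    try {
      return await action();
    } catch (_) {
      if (attempt + 1 >= config.maxAttempts) rethrow;
      await Future<void>.delayed(config.baseDelay * pow(2, attempt));
    }
  }
}
```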
### Changed
- `streamChat()` now accepts an optional `StreamChatOptions` parameter (usage sketch below)
- Improved error handling and retry logic across all backends
- Enhanced documentation
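A hedged usage sketch for the optional `StreamChatOptions` parameter; every field and signature here is an illustrative assumption based on the changelog wording, not the documented API.

```dart
/// Illustrative stub; consult the real StreamChatOptions for its fields.
class StreamChatOptions {
  final Duration? timeout; // assumed field, for illustration only
  const StreamChatOptions({this.timeout});
}

abstract class LLMChatRepository {
  /// The options parameter is optional; omitting it keeps old behavior.
  Stream<String> streamChat(String prompt, {StreamChatOptions? options});
}

Stream<String> ask(LLMChatRepository repo, String prompt) {
  return repo.streamChat(
    prompt,
    options: const StreamChatOptions(timeout: Duration(seconds: 30)),
  );
}
```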
## 0.1.0 - 2026-01-19
### Added
- Initial release
- Core abstractions for LLM interactions:
  - `LLMChatRepository` - Abstract interface for chat completions
  - `LLMMessage` - Message representation with roles and content
  - `LLMResponse` - Response wrapper with metadata
  - `LLMChunk` - Streaming response chunks
  - `LLMEmbedding` - Text embedding representation
- Tool calling support (a tool-definition sketch follows this list):
  - `LLMTool` - Tool definition with JSON Schema parameters
  - `LLMToolCall` - Tool invocation representation
  - `LLMToolParam` - Parameter definitions
- Exception types for error handling
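A hedged sketch of what defining a tool with JSON Schema parameters could look like; the constructor shapes and the example tool are assumptions, not llm_core's actual declarations.

```dart
/// Illustrative stubs; the real LLMToolParam/LLMTool carry more detail.
class LLMToolParam {
  final String name;
  final String type; // JSON Schema type, e.g. 'string', 'number'
  final String description;
  final bool required;
  const LLMToolParam(
    this.name,
    this.type,
    this.description, {
    this.required = false,
  });
}

class LLMTool {
  final String name;
  final String description;
  final List<LLMToolParam> params;
  const LLMTool(this.name, this.description, this.params);
}

// Example definition (hypothetical tool, for illustration only).
const weatherTool = LLMTool(
  'get_weather',
  'Look up the current weather for a city.',
  [LLMToolParam('city', 'string', 'City name', required: true)],
);
```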