llm_core 0.1.9

llm_core: ^0.1.9

Core abstractions for LLM (Large Language Model) interactions. Provides common interfaces, models, and utilities used by LLM backend implementations.

Changelog #

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[Unreleased] #

0.1.9 - 2026-02-28 #

Changed #

  • Breaking: Removed requireFinalAssistantResponse option from StreamChatOptions, StreamChatOptionsMerger, and StreamToolExecutor. Tool loops now always require a final assistant response — there is no reason to allow tool loops to end without the assistant reporting back.
  • maxToolAttempts default increased from 25 to 90 across all repositories and builders.
  • chatResponse() tool loop detection refined: only actual tool result chunks (LLMRole.tool) trigger the incomplete-loop check, not tool calls appearing alongside content.
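  A minimal sketch of the refined check; the field and helper names below are illustrative assumptions, not llm_core's actual implementation:

  ```dart
  // Illustrative only: not llm_core's actual code.
  enum LLMRole { system, user, assistant, tool }

  class LLMChunk {
    final LLMRole role;
    final String? content;
    LLMChunk({required this.role, this.content});
  }

  /// 0.1.9 rule: a tool loop counts as incomplete only when the stream ends
  /// on a tool *result* chunk (LLMRole.tool) with no assistant response after
  /// it. Tool calls arriving alongside assistant content no longer trigger it.
  bool isIncompleteToolLoop(List<LLMChunk> chunks) =>
      chunks.isNotEmpty && chunks.last.role == LLMRole.tool;
  ```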

0.1.8 - 2026-02-26 #

Added #

  • Tool result chunks emitted to stream: StreamToolExecutor now yields LLMChunk with role: LLMRole.tool after each tool execution, so chat consumers can display "Tool X returned: Y" per OpenAI function calling specs
  • LLMToolCall.toApiFormat() helper for converting to OpenAI/Ollama API format
  • Assistant message with tool_calls added to message history before tool results (API-compliant sequence; see the wire-format sketch after this list)
  • Content accumulation from stream chunks for assistant messages that include both text and tool calls
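  For orientation, the API-compliant sequence mentioned above looks like this on the wire (standard OpenAI function-calling JSON, written as plain Dart maps; LLMToolCall.toApiFormat() presumably emits the entries under tool_calls, though its exact output is not shown here):

  ```dart
  // Standard OpenAI-style ordering: the assistant's tool_calls message comes
  // first, followed by a tool-result message carrying the matching id.
  final messages = <Map<String, dynamic>>[
    {'role': 'user', 'content': 'What is the weather in Paris?'},
    {
      'role': 'assistant',
      'content': null,
      'tool_calls': [
        {
          'id': 'call_abc123',
          'type': 'function',
          'function': {'name': 'get_weather', 'arguments': '{"city": "Paris"}'},
        },
      ],
    },
    {
      'role': 'tool',
      'tool_call_id': 'call_abc123',
      'content': '{"temperature_c": 18}',
    },
  ];
  ```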

Changed #

  • Breaking: Removed toolName from LLMMessage and LLMChunkMessage; use toolCallId only (OpenAI canonical format)
  • Breaking: Tool message validation now requires toolCallId (removed the toolName option); see the sketch after this list
  • StreamToolExecutor accumulates content and thinking from chunks for the assistant message
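  A minimal sketch of the breaking change; the LLMMessage constructor parameters shown are assumptions, since the changelog confirms the field change but not the exact constructor shape:

  ```dart
  import 'package:llm_core/llm_core.dart';

  // Constructor parameters are assumptions, not the documented API.
  final toolResult = LLMMessage(
    role: LLMRole.tool,
    toolCallId: 'call_abc123', // required since 0.1.8; toolName is gone
    content: '{"temperature_c": 18}',
  );
  ```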

0.1.7 - 2026-02-10 #

Added #

  • batchEmbed() on LLMChatRepository: explicit API for embedding multiple texts in one call. Same signature as embed(); default implementation delegates to embed(). Documented for Ollama, OpenAI, and llama.cpp backends.
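  A hedged usage sketch; the parameter and return shapes below are assumptions based on "same signature as embed()", not the documented API:

  ```dart
  import 'package:llm_core/llm_core.dart';

  // Sketch only: batchEmbed()'s exact shape is assumed from the changelog note.
  Future<void> embedCorpus(LLMChatRepository repo, List<String> texts) async {
    // One explicit call for the whole batch; the default implementation simply
    // delegates to embed() per text, so the results are equivalent either way.
    final embeddings = await repo.batchEmbed(texts);
    print('Embedded ${embeddings.length} texts');
  }
  ```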

0.1.6 - 2026-02-10 #

Fixed #

  • Hardened StreamToolExecutor to always synthesize a non-empty toolCallId for LLMRole.tool messages when a backend-provided LLMToolCall.id is missing or empty, preventing "Tool message must have toolCallId" validation errors.
  • Improved tool execution error handling so that thrown tool exceptions are surfaced as tool messages rather than crashing the stream.
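  Both fixes can be pictured with a small sketch (illustrative only, not the actual StreamToolExecutor code):

  ```dart
  // Synthesize a non-empty id when the backend omits LLMToolCall.id, and turn
  // a thrown tool exception into a tool message instead of breaking the stream.
  Future<Map<String, dynamic>> runTool(
    String? backendId,
    String toolName,
    Future<String> Function() execute,
  ) async {
    final toolCallId = (backendId == null || backendId.isEmpty)
        ? 'call_${toolName}_${DateTime.now().microsecondsSinceEpoch}' // synthesized
        : backendId;
    try {
      return {
        'role': 'tool',
        'tool_call_id': toolCallId,
        'content': await execute(),
      };
    } catch (e) {
      // Surface the failure as a tool result the model can react to.
      return {
        'role': 'tool',
        'tool_call_id': toolCallId,
        'content': 'Tool execution failed: $e',
      };
    }
  }
  ```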

0.1.5 - 2026-01-26 #

Added #

  • StreamChatOptions class to encapsulate all streaming chat options and reduce parameter proliferation
  • RetryConfig and RetryUtil for configurable retry logic with exponential backoff (see the backoff sketch after this list)
  • TimeoutConfig for flexible timeout configuration (connection, read, total, large payloads)
  • LLMMetrics interface and DefaultLLMMetrics implementation for optional metrics collection
  • chatResponse() method on LLMChatRepository for non-streaming complete responses
  • Input validation utilities in Validation class
  • ChatRepositoryBuilderBase for implementing builder patterns in repository implementations
  • StreamChatOptionsMerger for merging options from multiple sources
  • HTTP client utilities (HttpClientHelper) for consistent request handling
  • Error handling utilities (ErrorHandlers, BackendErrorHandler) for standardized error processing
  • Tool execution utilities (ToolExecutor) for managing tool calling workflows
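  RetryConfig and RetryUtil are configuration and helper classes; their exact fields are not listed here, but the exponential backoff they implement follows the usual pattern. A generic sketch, with illustrative names and defaults:

  ```dart
  import 'dart:math';

  // Generic exponential-backoff loop of the kind RetryUtil encapsulates; the
  // names and default values below are illustrative, not llm_core's API.
  Future<T> retryWithBackoff<T>(
    Future<T> Function() action, {
    int maxAttempts = 3,
    Duration baseDelay = const Duration(milliseconds: 200),
    Duration maxDelay = const Duration(seconds: 10),
  }) async {
    for (var attempt = 0; ; attempt++) {
      try {
        return await action();
      } catch (_) {
        if (attempt + 1 >= maxAttempts) rethrow;
        // Delay doubles per attempt (base * 2^attempt), capped at maxDelay.
        final delayMs = baseDelay.inMilliseconds * pow(2, attempt).toInt();
        await Future.delayed(
          Duration(milliseconds: min(delayMs, maxDelay.inMilliseconds)),
        );
      }
    }
  }
  ```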

Changed #

  • streamChat() now accepts an optional StreamChatOptions parameter (see the sketch after this list)
  • Improved error handling and retry logic across all backends
  • Enhanced documentation
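  A hedged sketch of the new call shape; the changelog confirms only that an optional options value is accepted, so the parameter name, the StreamChatOptions fields, and the chunk fields are assumptions:

  ```dart
  import 'package:llm_core/llm_core.dart';

  // Sketch only: streamChat()'s exact signature is assumed, not documented here.
  Future<void> chat(LLMChatRepository repo, List<LLMMessage> history) async {
    final options = StreamChatOptions(); // defaults; fields are assumptions
    await for (final chunk in repo.streamChat(history, options: options)) {
      print(chunk); // each LLMChunk carries a role plus incremental content
    }
  }
  ```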

0.1.0 - 2026-01-19 #

Added #

  • Initial release
  • Core abstractions for LLM interactions:
    • LLMChatRepository - Abstract interface for chat completions
    • LLMMessage - Message representation with roles and content
    • LLMResponse - Response wrapper with metadata
    • LLMChunk - Streaming response chunks
    • LLMEmbedding - Text embedding representation
  • Tool calling support (a usage sketch follows this list):
    • LLMTool - Tool definition with JSON Schema parameters
    • LLMToolCall - Tool invocation representation
    • LLMToolParam - Parameter definitions
  • Exception types for error handling
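  A hedged sketch of defining a tool with these abstractions; the constructor parameters are assumptions, since the changelog names the types but not their signatures:

  ```dart
  import 'package:llm_core/llm_core.dart';

  // Illustrative only: constructor parameters are assumed, not documented here.
  final weatherTool = LLMTool(
    name: 'get_weather',
    description: 'Look up the current weather for a city',
    parameters: [
      LLMToolParam(
        name: 'city',
        type: 'string',
        description: 'City name, e.g. "Paris"',
        required: true,
      ),
    ],
  );
  ```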

Repository (GitHub)

Topics

#llm #ai #chat #embeddings #tools

Documentation

API reference

License

MIT

Dependencies

http, logging
