titan_basalt 1.12.6
Basalt — Titan's infrastructure & resilience toolkit. Reactive caching, rate limiting, circuit breakers, retry queues, and priority task processing.
# Changelog

## 1.12.6

### Changed

- ReadCore pattern — All public reactive getters across 11 infrastructure classes now return `ReadCore<T>` instead of `Core<T>`, enforcing read-only external access:
  - Embargo: `activeCount`, `queueLength`, `totalAcquires`
  - Arbiter: `conflictCount`, `lastResolution`, `totalResolved`
  - Tithe: `consumed`, `breakdown`
  - Lode: `available`, `inUse`, `size`, `waiters`
  - Census: `count`, `sum`, `min`, `max`, `last`
  - Warden: `isChecking`, `totalChecks`, per-service `status()`, `latency()`, `failures()`, `lastChecked()`
  - Banner: `operator[]`
  - Lattice: `status`, `completedCount`
  - Clarion: per-job and scheduler-level state getters
  - Tapestry: per-weave and store-level state getters
  - Sluice: per-stage and pipeline-level state getters
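The `Core` → `ReadCore` split is a standard owner-writes, consumers-read pattern: the class mutates a private writable cell and hands out a wrapper with no setter. A minimal Python sketch of the idea (class and method names are illustrative, not Basalt's Dart API):

```python
class Core:
    """Writable reactive cell: holds a value, notifies listeners on set."""
    def __init__(self, value):
        self._value = value
        self._listeners = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for fn in self._listeners:
            fn(new)

    def listen(self, fn):
        self._listeners.append(fn)

    def read_only(self):
        return ReadCore(self)


class ReadCore:
    """Read-only view over a Core: exposes value and listen, but no setter."""
    def __init__(self, core):
        self._core = core

    @property
    def value(self):
        return self._core.value

    def listen(self, fn):
        self._core.listen(fn)


# The owning class keeps the writable Core private and exposes only the view.
counter = Core(0)
public_counter = counter.read_only()
counter.value = 5                  # owner writes
assert public_counter.value == 5   # consumers read; writes raise AttributeError
```

Consumers can still observe changes, but any attempt to assign through the view fails at runtime, which is the guarantee the changelog entry describes.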
## 1.12.5

### Performance

- Annals — Replaced `List<AnnalEntry>` with `Queue<AnnalEntry>` and `removeAt(0)` with `removeFirst()` for O(1) eviction at capacity (~60× faster recording throughput at max entries).
- Tether — Made `call()` non-async (returns the Future directly, with `.onError()` for error tracking), eliminating async state-machine overhead. Lazy `DateTime.now()` via a dirty flag avoids a ~150 ns syscall per call. Replaced reactive `_callCount`/`_lastCallTime` TitanState nodes with plain counters (managed nodes: 4 → 2).
- Volley — Replaced reactive `_successCount`/`_failedCount` TitanState nodes with plain `int` counters. Pre-computed a fast-path eligibility flag. Inlined the no-retry/no-timeout worker path to skip `_executeWithRetry` indirection (managed nodes: 6 → 4).
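The Annals swap is the classic head-eviction fix: removing index 0 from an array-backed list shifts every remaining element, while a double-ended queue evicts its head in O(1). A language-neutral sketch in Python (`CAP` and the function names are illustrative, not the package API):

```python
from collections import deque

CAP = 5

def record_list(log, entry):
    # List-backed: pop(0) shifts every element, so eviction is O(n).
    if len(log) >= CAP:
        log.pop(0)
    log.append(entry)

def record_deque(log, entry):
    # Deque-backed: popleft() is O(1), so eviction cost is constant.
    if len(log) >= CAP:
        log.popleft()
    log.append(entry)

lst, dq = [], deque()
for i in range(12):
    record_list(lst, i)
    record_deque(dq, i)

# Both keep only the newest CAP entries; only the deque does it in O(1).
assert lst == list(dq) == [7, 8, 9, 10, 11]
```

Dart's `Queue.removeFirst()` plays the role of `popleft()` here; in Python one could also use `deque(maxlen=CAP)` to get the eviction implicitly.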
## 1.12.4

### Fixed

- BannerFlag: Removed the `assert` for rollout range validation in the const constructor. Rollout is now validated exclusively via `ArgumentError` in the `Banner()` constructor and `Banner.register()`, ensuring the check fires in release builds too.
## 1.12.3

### Changed

- Assert → Runtime Errors: All debug-only `assert` statements converted to runtime errors that fire in release builds:
  - Embargo: `ArgumentError` for non-positive permits
  - Trove: `ArgumentError` for non-positive maxEntries
  - Volley: `ArgumentError` for non-positive concurrency/maxRetries
  - Census: `ArgumentError` for non-positive maxEntries and invalid percentile range
  - Moat: `ArgumentError` for non-positive maxTokens and invalid consume amounts
  - Pyre: `ArgumentError` for non-positive concurrency/maxQueueSize/maxRetries
  - Arbiter: `ArgumentError` for null merge callback on custom strategy, `StateError` for use-after-dispose
  - Tithe: `ArgumentError` for non-positive budget/amount and invalid percentages, `StateError` for use-after-dispose
  - Lode: `ArgumentError` for non-positive maxSize, `StateError` for use-after-dispose and released lease access
  - Warden: `ArgumentError` for empty services list
  - Clarion: `StateError` for duplicate job registration
  - Tapestry: `StateError` for duplicate weave registration
  - Sluice: `ArgumentError` for empty stages and non-positive stage concurrency
  - Banner: `ArgumentError` for rollout values outside 0.0–1.0 (validated at `Banner` registration time)
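The motivation mirrors Python's `-O` flag: `assert` statements vanish in optimized/release builds, so input validation has to raise a real exception to survive. A sketch of the before/after shape (the constructor names are hypothetical):

```python
def make_semaphore_assert(permits: int):
    # Stripped entirely under `python -O` (and, analogously, Dart release mode).
    assert permits > 0, 'permits must be positive'
    return {'permits': permits}

def make_semaphore_checked(permits: int):
    # Explicit raise survives every build mode.
    if permits <= 0:
        raise ValueError('permits must be positive')
    return {'permits': permits}

try:
    make_semaphore_checked(0)
    raise AssertionError('expected ValueError')
except ValueError:
    pass
```

In the Dart package the raised types are `ArgumentError` for bad inputs and `StateError` for use-after-dispose, as listed above.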
## 1.12.2

### Fixed

- Dependency: Updated the minimum `titan` constraint to `^1.1.0` (requires the `Pillar.registerNodes()` API).
- Example: Added `example/example.dart` for the pub.dev documentation score.
## 1.12.1

### Improved

- Pyre — O(1) enqueue/dequeue via per-priority `Queue` buckets (was an O(n) sorted-list insert + `removeAt(0)`).
- Volley — Added `maxRetries`, `retryDelay`, `taskTimeout`, per-task `VolleyTask.timeout`, `onTaskComplete`/`onTaskFailed` callbacks, separate `successCount`/`failedCount` tracking, and an `isDisposed` guard.
- Tether — Rewritten as instance-based with reactive state (`registeredCount`, `callCount`, `lastCallTime`, `errorCount`), `managedNodes`, and `dispose()`. Added a `Tether.global` singleton with a static convenience API. Added a `tether()` Pillar factory.
- Annals — Fixed a `StreamController` leak (lazy creation + `dispose()`). Optimized `query()` with a limit to avoid materializing the full list. Changed the backing store from `Queue` to `List`.
- Saga — Compensation errors are now tracked in a `compensationErrors` list instead of being silently swallowed. Added an `onCompensationError` callback.
- Moat — `consume()` now uses a Completer-based wakeup instead of a polling loop, eliminating unnecessary CPU cycles.
- Trove — `_purgeExpired()` optimized to a single pass (avoids a redundant map re-lookup).
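Pyre's bucket change is a general trick: when priorities come from a small fixed set of levels, one FIFO per level gives O(1) enqueue and dequeue, because dequeue scans only the fixed level count rather than the item count. A Python sketch under that assumption (names are illustrative, not the Pyre API):

```python
from collections import deque

class BucketQueue:
    """Priority queue as fixed per-priority FIFO buckets.

    enqueue and dequeue are O(1) in the number of queued items, unlike a
    sorted list whose insert is O(n) and whose removeAt(0) shifts the
    whole backing array.
    """
    def __init__(self, levels=3):
        self._buckets = [deque() for _ in range(levels)]  # 0 = highest

    def enqueue(self, item, priority):
        self._buckets[priority].append(item)

    def dequeue(self):
        for bucket in self._buckets:   # scans `levels` buckets, not n items
            if bucket:
                return bucket.popleft()
        raise IndexError('queue is empty')

q = BucketQueue()
q.enqueue('low', 2)
q.enqueue('high', 0)
q.enqueue('mid', 1)
assert [q.dequeue(), q.dequeue(), q.dequeue()] == ['high', 'mid', 'low']
```

Items within one priority keep FIFO order, which a heap would not guarantee without an extra tiebreaker.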
### Deprecated

- Bulwark — Deprecated in favor of `Portcullis` (a superset). Will be removed in v2.0.
## 1.12.0

### Added

- Tapestry — Reactive event store with CQRS projections. Append-only event log with `TapestryStrand` envelopes (sequence, timestamp, correlationId, metadata), reactive `TapestryWeave` projections with fold functions and optional `where` filters, temporal event querying, `TapestryFrame` snapshots, replay, compaction, and a `maxEvents` limit. Aggregate reactive state (`eventCount`, `lastSequence`, `status`, `lastEventTime`, `weaveCount`). Per-weave state (`state`, `version`, `lastUpdated`).
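The fold-projection model can be sketched generically: an append-only log, plus read models built by folding each event into a state, optionally gated by a `where` filter, with replay so late-registered projections catch up on history. A Python sketch of the pattern (not the Tapestry API):

```python
class EventStore:
    """Append-only log with fold-based projections (CQRS read models)."""
    def __init__(self):
        self._log = []          # (sequence, event) envelopes
        self._projections = {}  # name -> [state, fold, where]

    def register(self, name, initial, fold, where=None):
        self._projections[name] = [initial, fold, where]
        # Replay existing history so a late projection catches up.
        for _, event in self._log:
            self._apply(name, event)

    def append(self, event):
        self._log.append((len(self._log) + 1, event))
        for name in self._projections:
            self._apply(name, event)

    def _apply(self, name, event):
        slot = self._projections[name]
        state, fold, where = slot
        if where is None or where(event):
            slot[0] = fold(state, event)

    def state(self, name):
        return self._projections[name][0]

store = EventStore()
store.append({'type': 'deposit', 'amount': 100})
# Registered after the first event, yet replay brings it up to date.
store.register('balance', 0,
               lambda s, e: s + e['amount'] if e['type'] == 'deposit'
               else s - e['amount'])
store.append({'type': 'withdraw', 'amount': 30})
assert store.state('balance') == 70
```

Snapshots and compaction then become optimizations over the same fold: persist `(state, sequence)` and drop or skip events at or below that sequence.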
## 1.11.0

### Added

- Clarion — Reactive job scheduler. Manages recurring and one-shot async jobs with configurable intervals, concurrency policies (`skipIfRunning`, `allowOverlap`), per-job reactive observability (`isRunning`, `runCount`, `errorCount`, `lastRun`, `nextRun`), aggregate reactive state (`status`, `activeCount`, `totalRuns`, `totalErrors`, `successRate`, `isIdle`, `jobCount`), pause/resume per job or globally, manual `trigger()`, and `ClarionRun` execution records.
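The `skipIfRunning` policy amounts to a per-job reentrancy guard: a trigger that arrives while the job is still running is counted and dropped rather than queued. A deliberately synchronous Python sketch (the real scheduler is async; all names here are illustrative):

```python
class Job:
    """One scheduled job with a skipIfRunning-style concurrency policy."""
    def __init__(self, fn, skip_if_running=True):
        self.fn = fn
        self.skip_if_running = skip_if_running
        self.is_running = False
        self.run_count = 0
        self.skipped = 0

    def trigger(self):
        if self.is_running and self.skip_if_running:
            self.skipped += 1   # overlapping tick: record and drop
            return False
        self.is_running = True
        try:
            self.fn()
            self.run_count += 1
        finally:
            self.is_running = False
        return True

job = Job(lambda: None)
assert job.trigger() is True and job.run_count == 1

# Simulate overlap: the job re-triggers itself while it is still running.
overlapping = Job(lambda: overlapping.trigger())
overlapping.trigger()
assert overlapping.run_count == 1 and overlapping.skipped == 1
```

With `skip_if_running=False` the guard is bypassed, which corresponds to the `allowOverlap` policy.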
## 1.10.0

### Added
- Sluice — Reactive multi-stage data pipeline. Processes items through configurable stages with per-stage metrics (processed, filtered, errors, queued), retry with configurable attempts, per-stage timeout, overflow strategies (backpressure, dropOldest, dropNewest), pause/resume control, and aggregate reactive state (fed, completed, failed, inFlight, status, errorRate).
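The three overflow strategies differ only in what happens when a bounded stage queue is full. A Python sketch of that dispatch (names are illustrative, not the Sluice API):

```python
from collections import deque

def offer(queue, item, max_len, strategy):
    """Apply an overflow strategy when a bounded stage queue is full."""
    if len(queue) < max_len:
        queue.append(item)
        return True
    if strategy == 'dropOldest':
        queue.popleft()        # evict the head, keep the newcomer
        queue.append(item)
        return True
    if strategy == 'dropNewest':
        return False           # reject the newcomer, keep the queue
    # 'backpressure': in an async pipeline the producer would await
    # capacity here; a synchronous sketch can only signal it.
    raise RuntimeError('backpressure: caller must wait for capacity')

q = deque([1, 2, 3])
assert offer(q, 4, 3, 'dropOldest') is True
assert list(q) == [2, 3, 4]
assert offer(q, 5, 3, 'dropNewest') is False
assert list(q) == [2, 3, 4]
```

The per-stage metrics in the entry above fall out naturally: each `False` return or eviction increments a dropped/filtered counter for that stage.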
## 1.9.0

### Added
- Tithe — Reactive quota & budget manager. Tracks cumulative resource consumption against configurable budgets with reactive signals (consumed, remaining, exceeded, ratio), per-key breakdown, threshold alerts at configurable percentages, auto-reset with periodic timer, and tryConsume for safe budget checks.
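`tryConsume`-style budgeting is all-or-nothing: a request that would exceed the budget is rejected without partially charging it. A Python sketch of the core bookkeeping (illustrative names, not the Tithe API):

```python
class Quota:
    """Cumulative budget tracker with safe tryConsume semantics."""
    def __init__(self, budget):
        if budget <= 0:
            raise ValueError('budget must be positive')
        self.budget = budget
        self.consumed = 0.0

    @property
    def remaining(self):
        return max(self.budget - self.consumed, 0.0)

    @property
    def ratio(self):
        return self.consumed / self.budget

    def try_consume(self, amount):
        # All-or-nothing: never overspends the budget.
        if amount <= 0:
            raise ValueError('amount must be positive')
        if self.consumed + amount > self.budget:
            return False
        self.consumed += amount
        return True

q = Quota(100)
assert q.try_consume(60) is True and q.remaining == 40
assert q.try_consume(50) is False   # would exceed: rejected, nothing charged
assert q.consumed == 60
```

Threshold alerts are then a comparison of `ratio` against the configured percentages after each successful consume, and auto-reset just zeroes `consumed` on a timer.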
## 1.8.0

### Added
- Lode — Reactive resource pool for managing bounded pools of reusable expensive resources (database connections, HTTP clients, worker isolates). Acquire/release with LodeLease, withResource convenience, health validation on checkout, warmup/drain lifecycle, timeout on exhausted pool, and reactive metrics (available, inUse, size, waiters, utilization).
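A bounded pool in miniature: reuse an idle resource if one exists, create a new one while under capacity, and otherwise refuse (where Lode would instead queue the waiter with a timeout). Python sketch, not the Lode API:

```python
class Pool:
    """Bounded pool: reuse idle resources, create up to max_size, then fail."""
    def __init__(self, factory, max_size):
        self._factory = factory
        self._max = max_size
        self._idle = []
        self.in_use = 0

    def acquire(self):
        if self._idle:
            self.in_use += 1
            return self._idle.pop()
        if self.in_use < self._max:
            self.in_use += 1
            return self._factory()
        # The real pool parks the caller here until release() or timeout.
        raise RuntimeError('pool exhausted')

    def release(self, resource):
        self.in_use -= 1
        self._idle.append(resource)

pool = Pool(factory=lambda: object(), max_size=2)
a, b = pool.acquire(), pool.acquire()
try:
    pool.acquire()
except RuntimeError:
    pass                      # at capacity: third acquire is refused
pool.release(a)
assert pool.acquire() is a    # recycled, not recreated
```

A `withResource`-style helper is then a try/finally wrapper around `acquire`/`release`, and health validation is a check on the resource before handing it back out.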
## 1.7.0

### Added
- Arbiter — Reactive conflict resolution with pluggable strategies (lastWriteWins, firstWriteWins, merge, manual). Submit values from multiple sources, detect conflicts reactively, auto-resolve or manually accept, with full resolution history and reactive state tracking.
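The pluggable strategies reduce to a choice function over timestamped submissions. A Python sketch of the three automatic strategies (manual acceptance omitted; names are illustrative, not the Arbiter API):

```python
def resolve(strategy, submissions, merge=None):
    """Resolve conflicting (source, value, timestamp) submissions."""
    if strategy == 'lastWriteWins':
        return max(submissions, key=lambda s: s[2])[1]
    if strategy == 'firstWriteWins':
        return min(submissions, key=lambda s: s[2])[1]
    if strategy == 'merge':
        if merge is None:
            # Mirrors the 1.12.3 ArgumentError for a null merge callback.
            raise ValueError('merge strategy requires a merge callback')
        values = [v for _, v, _ in submissions]
        acc = values[0]
        for v in values[1:]:
            acc = merge(acc, v)
        return acc
    raise ValueError(f'unknown strategy: {strategy}')

subs = [('deviceA', {'a': 1}, 10), ('deviceB', {'b': 2}, 20)]
assert resolve('lastWriteWins', subs) == {'b': 2}
assert resolve('firstWriteWins', subs) == {'a': 1}
assert resolve('merge', subs, merge=lambda x, y: {**x, **y}) == {'a': 1, 'b': 2}
```

Resolution history is then just an append of `(strategy, submissions, result)` after each call.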
## 1.6.0

### Added

- Warden — Reactive service health monitor with continuous polling, per-service reactive state (status, latency, failures, lastChecked), aggregate health for critical services, configurable down thresholds, per-service interval overrides, and manual `checkService()`/`checkAll()` methods.
## 1.5.0

### Added

- Census — Reactive sliding-window data aggregation with count, sum, average, min, max, and percentile. Auto-records from reactive sources or accepts manual `record()` calls. Incremental O(1) updates on the hot path, with a configurable `maxEntries` buffer cap.
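"Incremental O(1) on the hot path" typically means count and sum update in constant time on every record, while min/max only need recomputing when the evicted entry was itself an extreme. A Python sketch under that reading (not the Census implementation):

```python
from collections import deque

class Window:
    """Sliding-window aggregator: O(1) record on the hot path."""
    def __init__(self, max_entries):
        self._buf = deque()
        self._max = max_entries
        self.count = 0
        self.sum = 0.0
        self.min = float('inf')
        self.max = float('-inf')

    def record(self, x):
        if len(self._buf) >= self._max:
            evicted = self._buf.popleft()
            self.sum -= evicted
            self.count -= 1
            if evicted in (self.min, self.max):
                # Cold path: recompute extremes only when one leaves.
                self.min = min(self._buf) if self._buf else float('inf')
                self.max = max(self._buf) if self._buf else float('-inf')
        self._buf.append(x)
        self.count += 1
        self.sum += x
        self.min = min(self.min, x)
        self.max = max(self.max, x)

    @property
    def average(self):
        return self.sum / self.count if self.count else 0.0

w = Window(3)
for x in (5, 1, 9, 4):   # 5 is evicted when 4 arrives
    w.record(x)
assert (w.count, w.sum, w.min, w.max) == (3, 14, 1, 9)
```

Percentiles, by contrast, inherently need the buffered entries, which is one reason the window keeps the raw values around at all.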
## 1.4.0

### Added
- Embargo — Reactive async mutex/semaphore with configurable permits, FIFO queuing, timeout support, automatic release on error, and reactive status/queue tracking. Mutex mode (permits=1) for double-submit prevention, semaphore mode (permits=N) for connection pooling.
## 1.3.0

### Added
- Lattice — Reactive DAG (directed acyclic graph) task executor with dependency resolution, automatic parallelism via Kahn's algorithm, fail-fast error handling, reactive progress/status tracking, and upstream result passing.
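Kahn's algorithm yields the parallelism for free: repeatedly take every task whose dependencies are all satisfied as one batch, run the batch concurrently, and unlock downstream tasks as the batch completes. A Python sketch of the level grouping (illustrative, not the Lattice API):

```python
from collections import defaultdict, deque

def kahn_levels(deps):
    """Group DAG tasks into levels; each level can run in parallel.

    `deps` maps task -> list of tasks it depends on.
    Raises ValueError on cycles (some tasks never reach in-degree 0).
    """
    indeg = {t: len(d) for t, d in deps.items()}
    dependents = defaultdict(list)
    for t, d in deps.items():
        for up in d:
            dependents[up].append(t)

    ready = deque(t for t, n in indeg.items() if n == 0)
    levels, done = [], 0
    while ready:
        level = sorted(ready)          # sorted only for a stable demo order
        ready.clear()
        levels.append(level)
        done += len(level)
        for t in level:
            for down in dependents[t]:
                indeg[down] -= 1
                if indeg[down] == 0:
                    ready.append(down)
    if done != len(deps):
        raise ValueError('cycle detected')
    return levels

deps = {'fetch': [], 'parse': ['fetch'], 'lint': ['fetch'],
        'build': ['parse', 'lint']}
assert kahn_levels(deps) == [['fetch'], ['lint', 'parse'], ['build']]
```

Fail-fast error handling then means abandoning later levels once any task in the current batch throws; upstream result passing hands each task the outputs of the dependencies that unlocked it.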
## 1.2.0

### Added
- Sieve — Reactive search, filter & sort engine for collections. Text search, named predicate filters (AND logic), sorting, and reactive outputs — all Pillar-managed.
## 1.1.0

### Added

- Banner — Reactive feature flag registry with percentage-based rollout, context-aware targeting rules, developer overrides, expiration, and remote config integration. Each flag is a reactive `Core<bool>` that triggers UI rebuilds when updated.
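Percentage rollout is commonly implemented by hashing (flag, user) into a stable bucket in [0, 1): the verdict is deterministic per user, and raising the rollout fraction only ever turns the flag on for more users. A Python sketch of one such scheme (not necessarily how Banner buckets users):

```python
import hashlib

def in_rollout(flag: str, user_id: str, rollout: float) -> bool:
    """Deterministically decide whether `user_id` is inside `rollout`."""
    if not 0.0 <= rollout <= 1.0:
        raise ValueError('rollout must be within 0.0-1.0')
    # Stable across processes, unlike Python's salted built-in hash().
    digest = hashlib.sha256(f'{flag}:{user_id}'.encode()).digest()
    bucket = int.from_bytes(digest[:8], 'big') / 2**64   # uniform in [0, 1)
    return bucket < rollout

assert in_rollout('dark_mode', 'user-42', 1.0) is True
assert in_rollout('dark_mode', 'user-42', 0.0) is False
# Repeated checks for the same user always agree.
assert in_rollout('dark_mode', 'user-42', 0.5) == \
       in_rollout('dark_mode', 'user-42', 0.5)
```

Including the flag name in the hash keeps the 50% cohorts of different flags independent, so the same users are not always the guinea pigs.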
## 1.0.0

### Initial Release
Infrastructure & resilience features extracted from titan core:
- Trove — Reactive TTL/LRU in-memory cache with hit-rate tracking
- Moat — Token-bucket rate limiter with per-key quotas (MoatPool)
- Portcullis — Reactive circuit breaker with half-open probing
- Anvil — Dead letter & retry queue with configurable backoff
- Pyre — Priority-ordered async task queue with concurrency control
- Codex — Reactive paginated data loading (offset & cursor-based)
- Quarry — SWR data queries with dedup, retry, and optimistic updates
- Bulwark — Lightweight circuit breaker with reactive state
- Saga — Multi-step workflow orchestration with compensation/rollback
- Volley — Parallel batch async execution with progress tracking
- Tether — Composable middleware-style action chain
- Annals — Capped, queryable append-only audit log
All features integrate with Pillar via extension methods — use `late final cache = trove(...)` just like core factory methods.
