
Production-grade HTTP resilience for Dart & Flutter. Retry, circuit breaker, timeout, bulkhead, hedging, fallback, structured observability, and header-redacted logging.

davianspace_http_resilience #


A production-grade Dart / Flutter HTTP resilience library inspired by Microsoft.Extensions.Http.Resilience and Polly.

Built for enterprise workloads: composable middleware pipelines, seven resilience policies, structured observability, configuration-driven setup, deterministic resource lifecycle, and header-redacted security logging — all with zero reflection and strict null-safety.


Why This Package? #

| Concern | How We Address It |
|---|---|
| Transient failures | Retry with constant, linear, exponential, and jittered back-off |
| Cascading failures | Circuit breaker with Closed → Open → Half-Open state machine |
| Tail latency | Hedging fires speculative requests after a configurable delay |
| Degraded service | Fallback returns a cached / synthetic response on failure |
| Slow endpoints | Per-attempt and total-operation timeout deadlines |
| Overload protection | Bulkhead + bulkhead-isolation with queue-depth limits |
| Security | Automatic header redaction in logs (auth, cookies, API keys) |
| Ops visibility | Event hub, circuit health checks, live semaphore metrics |
| Config flexibility | JSON-driven policy configuration with runtime reload |
| Resource safety | dispose() on every policy, handler, and client |

Features at a Glance #

| Feature | Description |
|---|---|
| Composable middleware pipeline | Chain any number of handlers in a type-safe, ordered pipeline |
| Retry policy | Constant, linear, exponential, and decorrelated-jitter back-off with composable predicates |
| Circuit breaker | Closed / Open / Half-Open state machine with shared circuit registries |
| Timeout policy | Per-attempt or total-operation deadlines |
| Bulkhead (concurrency limiter) | Bounded max-parallel + queue-depth with back-pressure |
| Bulkhead isolation | Semaphore-based isolation with completer-signalling and live metrics |
| Hedging | Speculative execution to reduce tail latency for idempotent operations |
| Fallback | Status-code or exception predicate with async fallback action |
| Per-request streaming | Override the pipeline streaming mode per-request via metadata |
| Configuration-driven policies | ResilienceConfigLoader + JSON sources bind policies at runtime |
| Fluent builder DSL | FluentHttpClientBuilder for expressive, step-by-step client construction |
| Structured logging | Header-redacted structured logging via davianspace_logging |
| Typed HTTP client | Ergonomic get / post / put / patch / delete / head / options verbs |
| Named client registry | HttpClientFactory for shared, lifecycle-managed clients |
| Cancellation support | CancellationToken for cooperative cancellation across the pipeline |
| Response extensions | ensureSuccess(), bodyAsString, bodyAsJsonMap, bodyAsJsonList |
| Retry predicate DSL | Composable RetryPredicates with .or() / .and() combinators |
| Transport-agnostic policy engine | Policy.wrap for any async operation (DB, gRPC, file I/O) |
| Event hub | ResilienceEventHub broadcasts retry, circuit, timeout, fallback, and bulkhead events |
| Health checks | CircuitBreakerRegistry snapshots for readiness/liveness probes |
| Deterministic disposal | dispose() on policies, handlers, and clients for leak-free operation |

Requirements #

| Requirement | Version |
|---|---|
| Dart SDK | >=3.0.0 <4.0.0 |
| Flutter | Any (optional — works as pure Dart) |

Runtime dependencies (all from pub.dev):

| Package | Version |
|---|---|
| http | ^1.2.1 |
| davianspace_dependencyinjection | ^1.0.3 |
| davianspace_logging | ^1.0.3 |

Installation #

dependencies:
  davianspace_http_resilience: ^1.0.3

Then run:

dart pub get

Companion Packages #

| Package | Purpose |
|---|---|
| davianspace_http_ratelimit | Adds withRateLimit() to HttpClientBuilder; six rate-limiting algorithms + server-side admission control |
| davianspace_dependencyinjection | Adds addHttpClientFactory() and addTypedHttpClient<T>() to ServiceCollection |
| davianspace_logging | Structured logging framework; inject a Logger into LoggingHandler via addLogging() on ServiceCollection |

Add both packages to use rate limiting:

dependencies:
  davianspace_http_resilience: ^1.0.3
  davianspace_http_ratelimit:  ^1.0.3

Quick Start #

Option A — Named client factory (recommended) #

import 'package:davianspace_http_resilience/davianspace_http_resilience.dart';

void main() async {
  final client = HttpClientFactory.create('my-api')
      .withBaseUri(Uri.parse('https://api.example.com/v1'))
      .withDefaultHeader('Accept', 'application/json')
      .withLogging()
      .withRetry(RetryPolicy.exponential(
        maxRetries: 3,
        baseDelay: Duration(milliseconds: 200),
        useJitter: true,
      ))
      .withCircuitBreaker(CircuitBreakerPolicy(
        circuitName: 'my-api',
        failureThreshold: 5,
        breakDuration: Duration(seconds: 30),
      ))
      .withTimeout(TimeoutPolicy(timeout: Duration(seconds: 10)))
      .withBulkhead(BulkheadPolicy(maxConcurrency: 20, maxQueueDepth: 100))
      .build();

  try {
    final body = await client
        .get(Uri.parse('/users/42'))
        .then((r) => r.ensureSuccess().bodyAsJsonMap);
    print(body);
  } finally {
    client.dispose(); // Always dispose when done
  }
}

Option B — Lightweight builder (tests, scripts, one-off clients) #

final client = HttpClientBuilder('catalog')
    .withRetry(RetryPolicy.constant(maxRetries: 2))
    .withTimeout(TimeoutPolicy(timeout: Duration(seconds: 5)))
    .build();

try {
  final response = await client.get(Uri.parse('https://api.example.com/items'));
  print(response.bodyAsString);
} finally {
  client.dispose();
}

Option C — Configuration-driven (enterprise / multi-environment) #

const configJson = '''
{
  "Resilience": {
    "Retry": { "MaxRetries": 3, "Backoff": { "Type": "exponential", "BaseMs": 200, "UseJitter": true } },
    "Timeout": { "Seconds": 10 },
    "CircuitBreaker": { "CircuitName": "api", "FailureThreshold": 5, "BreakSeconds": 30 },
    "BulkheadIsolation": { "MaxConcurrentRequests": 20, "MaxQueueSize": 50 },
    "Hedging": { "HedgeAfterMs": 300, "MaxHedgedAttempts": 2 },
    "Fallback": { "StatusCodes": [500, 502, 503, 504] }
  }
}
''';

const loader = ResilienceConfigLoader();
const binder = ResilienceConfigBinder();

final config = loader.load(configJson);
final pipeline = binder.buildPipeline(config);

final result = await pipeline.execute(() => httpClient.get(uri));

Architecture #

Application code
      │
      ├── ResilienceConfigLoader  ← bind policies from JSON / env at runtime
      │         │
      │    JsonStringConfigSource / InMemoryConfigSource / custom source
      │
      ▼
HttpClientFactory  (named registry)   HttpClientBuilder  (lightweight)
      │                                       │
      └───────────────┬───────────────────────┘
                      ▼
            ResilientHttpClient    ← get / post / put / patch / delete / head / options
                      │
                      ▼
           HttpHandler pipeline    ← ordered chain of DelegatingHandler instances
     ┌─────────────────────────────────────────────────────────┐
     │  LoggingHandler              (outermost — logs, redacts)│
     │  RetryHandler                                           │
     │  CircuitBreakerHandler                                  │
     │  TimeoutHandler                                         │
     │  BulkheadHandler / BulkheadIsolationHandler             │
     │  HedgingHandler              (speculative execution)    │
     │  FallbackHandler             (status/exception fallback)│
     │  TerminalHandler             (innermost — HTTP I/O)     │
     └─────────────────────────────────────────────────────────┘
                      │
                      ▼
              package:http  (http.Client)

Pipeline Execution Order #

Each handler implements:

Future<HttpResponse> send(HttpContext context);

Handlers are linked via DelegatingHandler.innerHandler. The outermost handler executes first; TerminalHandler performs the actual network I/O.
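The chaining mechanics can be illustrated with a small self-contained sketch. The names here (Handler, LoggingStub, TerminalStub) are hypothetical stand-ins, not this library's classes; they only show why the outermost handler runs first and sees the response last:

```dart
// Illustrative delegating-handler chain (hypothetical names).
abstract class Handler {
  Handler? inner;
  Future<String> send(String request);
}

class LoggingStub extends Handler {
  LoggingStub(this.log);
  final List<String> log;

  @override
  Future<String> send(String request) async {
    log.add('-> $request');                  // outermost: runs first
    final response = await inner!.send(request);
    log.add('<- $response');                 // and sees the response last
    return response;
  }
}

class TerminalStub extends Handler {
  @override
  Future<String> send(String request) async => 'response for $request';
}

Future<void> main() async {
  final log = <String>[];
  final pipeline = LoggingStub(log)..inner = TerminalStub();
  final r = await pipeline.send('GET /users');
  print(r);   // response for GET /users
  print(log); // [-> GET /users, <- response for GET /users]
}
```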

Dependency Direction — Clean Architecture #

factory   →   handlers   →   policies   →   pipeline   →   core

All arrows point inward. No layer has a compile-time dependency on any layer above it.

For a detailed architecture breakdown including state-machine diagrams, dependency graphs, and sequence flows, see doc/architecture.md.


Usage Guide #

Retry Policy #

// Exponential back-off with full jitter, retrying only on 5xx + network errors
final policy = RetryPolicy.exponential(
  maxRetries: 4,
  baseDelay: Duration(milliseconds: 200),
  maxDelay: Duration(seconds: 30),
  useJitter: true,
  shouldRetry: RetryPredicates.serverErrors.or(RetryPredicates.networkErrors),
);

// With onRetry callback for telemetry
final retryPolicy = RetryResiliencePolicy(
  maxRetries: 3,
  backoff: ExponentialBackoff(
    Duration(milliseconds: 200),
    maxDelay: Duration(seconds: 30),
    useJitter: true,
  ),
  onRetry: (attempt, exception) {
    metrics.increment('http.retry', tags: {'attempt': '$attempt'});
  },
);
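For intuition, here is the delay schedule an exponential back-off with full jitter typically produces. exponentialDelay is a hypothetical helper written for illustration, not the library's internal implementation:

```dart
import 'dart:math';

// Exponential back-off: delay doubles each attempt, capped at maxDelay.
// Full jitter draws uniformly from [0, cappedDelay] to desynchronise
// retrying clients. Illustrative helper, not the library's internals.
Duration exponentialDelay(int attempt, Duration base, Duration maxDelay,
    {bool jitter = false, Random? rng}) {
  final rawMs = base.inMilliseconds * pow(2, attempt - 1);
  final cappedMs = min(rawMs, maxDelay.inMilliseconds).toInt();
  if (!jitter) return Duration(milliseconds: cappedMs);
  return Duration(milliseconds: (rng ?? Random()).nextInt(cappedMs + 1));
}

void main() {
  // Attempts 1..4 with a 200 ms base: 200, 400, 800, 1600 ms (before cap).
  for (var attempt = 1; attempt <= 4; attempt++) {
    print(exponentialDelay(
        attempt, Duration(milliseconds: 200), Duration(seconds: 30)));
  }
}
```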

Circuit Breaker #

final policy = CircuitBreakerPolicy(
  circuitName: 'payments',   // shared across all clients with this name
  failureThreshold: 5,       // 5 consecutive failures trips the breaker
  successThreshold: 2,       // 2 successes in half-open state to close
  breakDuration: Duration(seconds: 30),
);
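The Closed → Open → Half-Open life cycle can be sketched as a minimal state machine. MiniBreaker is illustrative only and omits timers and concurrency; it is not CircuitBreakerPolicy's implementation:

```dart
// Minimal circuit-breaker state machine (conceptual sketch).
enum State { closed, open, halfOpen }

class MiniBreaker {
  MiniBreaker({this.failureThreshold = 5, this.successThreshold = 2});
  final int failureThreshold;
  final int successThreshold;
  State state = State.closed;
  int _failures = 0, _successes = 0;

  void onFailure() {
    if (state == State.halfOpen) {
      state = State.open; // a failed probe re-opens the circuit
      return;
    }
    if (++_failures >= failureThreshold) state = State.open;
  }

  // Called once breakDuration has elapsed: allow probe requests.
  void onBreakElapsed() {
    if (state == State.open) {
      state = State.halfOpen;
      _successes = 0;
    }
  }

  void onSuccess() {
    if (state == State.halfOpen && ++_successes >= successThreshold) {
      state = State.closed; // enough probes succeeded: recover
      _failures = 0;
    }
  }
}

void main() {
  final b = MiniBreaker();
  for (var i = 0; i < 5; i++) b.onFailure();
  print(b.state); // State.open — threshold tripped
  b.onBreakElapsed();
  print(b.state); // State.halfOpen — probing allowed
  b.onSuccess();
  b.onSuccess();
  print(b.state); // State.closed — recovered
}
```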

Timeout Policy #

// 10-second per-attempt deadline
final policy = TimeoutPolicy(timeout: Duration(seconds: 10));

Bulkhead (Concurrency Control) #

final policy = BulkheadPolicy(
  maxConcurrency: 20,
  maxQueueDepth: 100,
  queueTimeout: Duration(seconds: 10),
);
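Conceptually, a bulkhead is a bounded concurrency counter with a bounded FIFO wait queue: callers beyond maxConcurrency wait, and callers beyond maxQueueDepth are rejected immediately. MiniBulkhead below is a hypothetical sketch of that idea and ignores queue-timeout and fairness details the real policy handles:

```dart
import 'dart:async';
import 'dart:collection';

// Conceptual bulkhead sketch: bounded concurrency + bounded wait queue.
class MiniBulkhead {
  MiniBulkhead({required this.maxConcurrency, required this.maxQueueDepth});
  final int maxConcurrency;
  final int maxQueueDepth;
  int _active = 0;
  final _queue = Queue<Completer<void>>();

  Future<T> run<T>(Future<T> Function() op) async {
    if (_active >= maxConcurrency) {
      if (_queue.length >= maxQueueDepth) {
        throw StateError('bulkhead queue full'); // back-pressure rejection
      }
      final slot = Completer<void>();
      _queue.add(slot);
      await slot.future; // wait until a slot frees up
    }
    _active++;
    try {
      return await op();
    } finally {
      _active--;
      if (_queue.isNotEmpty) _queue.removeFirst().complete();
    }
  }
}

Future<void> main() async {
  final bh = MiniBulkhead(maxConcurrency: 1, maxQueueDepth: 0);
  final slow = bh.run(
      () => Future.delayed(Duration(milliseconds: 50), () => 'ok'));
  try {
    await bh.run(() async => 'never runs');
  } on StateError catch (e) {
    print('rejected: ${e.message}'); // rejected: bulkhead queue full
  }
  print(await slow); // ok
}
```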

Bulkhead Isolation #

Semaphore-based isolation with rejection callbacks and live metrics:

final policy = BulkheadIsolationPolicy(
  maxConcurrentRequests: 10,
  maxQueueSize: 20,
  queueTimeout: Duration(seconds: 5),
  onRejected: (reason) =>
      log.warning('Bulkhead rejected: $reason'),  // queueFull or queueTimeout
);

final client = HttpClientFactory.create('catalog')
    .withBulkheadIsolation(policy)
    .build();

// Live metrics via the handler
final handler = BulkheadIsolationHandler(policy);
print('active: ${handler.activeCount}');
print('queued: ${handler.queuedCount}');
print('free:   ${handler.semaphore.availableSlots}');

Hedging (Speculative Execution) #

Hedging fires concurrent speculative requests to reduce tail latency. Use only for idempotent operations (GET, HEAD, etc.).

final hedging = HedgingPolicy(
  hedgeAfter: Duration(milliseconds: 300),
  maxHedgedAttempts: 2,
);

final client = HttpClientBuilder('search')
    .withHedging(hedging)
    .build();

Or configure via JSON:

{
  "Hedging": { "HedgeAfterMs": 300, "MaxHedgedAttempts": 2 }
}
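The underlying idea is racing the primary request against a delayed duplicate and taking whichever finishes first, which plain Dart can express with Future.any. hedged below is a hypothetical one-hedge sketch, not the library's HedgingHandler:

```dart
// Concept sketch of hedging: start the primary attempt, and after
// hedgeAfter start a second attempt; the first to complete wins.
Future<T> hedged<T>(Future<T> Function() attempt, Duration hedgeAfter) {
  final primary = attempt();
  final hedge = Future<T>.delayed(hedgeAfter, attempt);
  return Future.any([primary, hedge]);
}

Future<void> main() async {
  var calls = 0;
  final result = await hedged(() async {
    final n = ++calls;
    // First call is slow; the hedge (second call) finishes first.
    await Future.delayed(Duration(milliseconds: n == 1 ? 500 : 10));
    return 'attempt $n';
  }, Duration(milliseconds: 50));
  print(result); // attempt 2
}
```

Note the losing attempt is not cancelled here; that, plus a cap on hedge count, is exactly what a real hedging policy adds.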

Fallback #

Returns a cached or synthetic response when the downstream call fails:

final fallback = FallbackPolicy(
  fallbackAction: (context, error, stackTrace) async =>
      HttpResponse(statusCode: 200, body: utf8.encode('{"cached": true}')),
  shouldHandle: (response, exception, context) {
    if (exception != null) return true;
    return response != null && [500, 502, 503].contains(response.statusCode);
  },
);

final client = HttpClientBuilder('catalog')
    .withFallback(fallback)
    .build();

Or configure the trigger status codes via JSON, supplying only the action programmatically:

{
  "Fallback": { "StatusCodes": [500, 502, 503, 504] }
}

final policy = binder.buildFallbackPolicy(
  config.fallback!,
  fallbackAction: (ctx, err, st) async =>
      HttpResponse(statusCode: 200, body: utf8.encode('{"offline": true}')),
);

Per-Request Streaming #

Override the pipeline's default streaming mode on a per-request basis:

// Stream only this request (no body buffering):
final response = await client.get(
  Uri.parse('/large-file'),
  metadata: {HttpRequest.streamingKey: true},
);

// Force-buffer even when the pipeline default is streaming:
final response = await client.get(
  Uri.parse('/small-resource'),
  metadata: {HttpRequest.streamingKey: false},
);

Custom Handlers #

Extend DelegatingHandler to inject cross-cutting concerns:

final class AuthHandler extends DelegatingHandler {
  AuthHandler(this._tokenProvider) : super.create();
  final Future<String> Function() _tokenProvider;

  @override
  Future<HttpResponse> send(HttpContext context) async {
    final token = await _tokenProvider();
    context.updateRequest(
      context.request.withHeader('Authorization', 'Bearer $token'),
    );
    return innerHandler.send(context);
  }
}

final client = HttpClientBuilder('api')
    .addHandler(AuthHandler(() => tokenService.getAccessToken()))
    .withRetry(RetryPolicy.exponential(maxRetries: 3))
    .build();

DI Container Integration (davianspace_dependencyinjection) #

When used with davianspace_dependencyinjection, HttpClientFactory and typed HTTP clients become first-class injectable services.

dependencies:
  davianspace_http_resilience: ^1.0.3
  davianspace_dependencyinjection: ^1.0.3

import 'package:davianspace_http_resilience/davianspace_http_resilience.dart';
import 'package:davianspace_dependencyinjection/davianspace_dependencyinjection.dart';

// Typed client wrapper
class CatalogService {
  CatalogService(this._client);
  final ResilientHttpClient _client;

  Future<List<dynamic>> getItems() async {
    final r = await _client.get(Uri.parse('/items'));
    return r.ensureSuccess().bodyAsJsonList ?? [];
  }
}

// Cascades (..) return the ServiceCollection itself, so build the
// provider in a separate step.
final services = ServiceCollection()
  ..addHttpClientFactory((factory) {
    factory.configureDefaults((b) => b
        .withDefaultHeader('Accept', 'application/json')
        .withLogging());
  })
  ..addTypedHttpClient<CatalogService>(
    (client) => CatalogService(client),
    clientName: 'catalog',
    configure: (b) => b
        .withBaseUri(Uri.parse('https://catalog.svc/v2'))
        .withRetry(RetryPolicy.exponential(maxRetries: 3))
        .withCircuitBreaker(const CircuitBreakerPolicy(circuitName: 'catalog'))
        .withTimeout(const TimeoutPolicy(timeout: Duration(seconds: 10))),
  );
final provider = services.buildServiceProvider();

// Inject CatalogService anywhere — a fresh ResilientHttpClient is
// created per resolution from the shared HttpClientFactory.
final catalog = provider.getRequired<CatalogService>();
final items = await catalog.getItems();

| Method | Registered type | Lifetime |
|---|---|---|
| addHttpClientFactory() | HttpClientFactory | Singleton |
| addTypedHttpClient<T>() | T | Transient |

Named Client Factory #

Register once, resolve anywhere:

// Register in main() or DI container
final factory = HttpClientFactory();

factory.addClient(
  'catalog',
  (b) => b
      .withBaseUri(Uri.parse('https://catalog.internal/v1'))
      .withRetry(RetryPolicy.exponential(maxRetries: 3))
      .withCircuitBreaker(CircuitBreakerPolicy(circuitName: 'catalog'))
      .withLogging(),
);

// Resolve anywhere in the application
final client = factory.createClient('catalog');

Cancellation #

final token = CancellationToken();

// Cancel from a UI or lifecycle event, e.g. in a widget's dispose():
token.cancel('widget disposed');

// Pass to the request context
final context = HttpContext(
  request: HttpRequest(method: HttpMethod.get, uri: uri),
  cancellationToken: token,
);
final response = await pipeline.send(context);

Response Helpers #

final response = await client.post(
  Uri.parse('/orders'),
  body: jsonEncode(order),
  headers: {'Content-Type': 'application/json'},
);

// Throws HttpStatusException for non-2xx
final map  = response.ensureSuccess().bodyAsJsonMap;
final list = response.ensureSuccess().bodyAsJsonList;
final text = response.bodyAsString;

Transport-Agnostic Policy Engine #

Policy and PolicyWrap provide resilience logic independent of HTTP — wrap any async operation (database calls, file I/O, gRPC, etc.):

// Build via the Policy factory
final policy = Policy.wrap([
  Policy.retry(maxRetries: 3),
  Policy.timeout(duration: Duration(seconds: 5)),
  Policy.bulkheadIsolation(maxConcurrentRequests: 10),
]);

final result = await policy.execute(() async {
  return await myService.fetchData();
});

// Or use the fluent builder
final pipeline = ResiliencePipelineBuilder()
    .addPolicy(RetryResiliencePolicy(maxRetries: 3))
    .addPolicy(TimeoutResiliencePolicy(timeout: Duration(seconds: 5)))
    .build();

final result = await pipeline.execute(() => someOperation());

Configuration-Driven Policies #

Load all policy parameters from JSON — ideal for feature flags, environment-specific tuning, and dynamic reconfiguration:

const json = '''
{
  "Resilience": {
    "Retry": {
      "MaxRetries": 3,
      "RetryForever": false,
      "Backoff": {
        "Type": "exponential",
        "BaseMs": 200,
        "MaxDelayMs": 30000,
        "UseJitter": true
      }
    },
    "Timeout": { "Seconds": 10 },
    "CircuitBreaker": {
      "CircuitName": "api",
      "FailureThreshold": 5,
      "SuccessThreshold": 1,
      "BreakSeconds": 30
    },
    "Bulkhead": { "MaxConcurrency": 20, "MaxQueueDepth": 100 },
    "BulkheadIsolation": { "MaxConcurrentRequests": 10, "MaxQueueSize": 50 },
    "Hedging": { "HedgeAfterMs": 300, "MaxHedgedAttempts": 2 },
    "Fallback": { "StatusCodes": [500, 502, 503, 504] }
  }
}
''';

const loader = ResilienceConfigLoader();
const binder = ResilienceConfigBinder();

final config = loader.load(json);

// Build a composed pipeline from all configured sections
final pipeline = binder.buildPipeline(config);

// Or build individual policies
final retry   = binder.buildRetry(config.retry!);
final timeout = binder.buildTimeout(config.timeout!);
final hedging = binder.buildHedging(config.hedging!);
final fallback = binder.buildFallbackPolicy(
  config.fallback!,
  fallbackAction: (ctx, err, st) async => HttpResponse(statusCode: 200),
);

// Register all policies into a named registry in one call
PolicyRegistry.instance.loadFromConfig(config);

Observability & Events #

final hub = ResilienceEventHub();

hub.stream.listen((event) {
  switch (event) {
    case RetryEvent():
      metrics.increment('resilience.retry', tags: {'attempt': '${event.attempt}'});
    case CircuitOpenEvent():
      alerting.fire('circuit-open', circuit: event.circuitName);
    case TimeoutEvent():
      metrics.increment('resilience.timeout');
    case FallbackEvent():
      metrics.increment('resilience.fallback');
    case BulkheadRejectedEvent():
      metrics.increment('resilience.bulkhead_rejected');
    default:
      break;
  }
});

Health Checks & Monitoring #

Use CircuitBreakerRegistry for readiness/liveness probes:

final registry = CircuitBreakerRegistry.instance;

// Overall health of all circuits (bool getter)
final allHealthy = registry.isHealthy;          // true when ALL circuits are Closed

// Point-in-time state for each circuit
final snap = registry.snapshot;                 // Map<String, CircuitState>
for (final e in snap.entries) {
  print('${e.key}: ${e.value}');                // e.g. "payments: CircuitState.closed"
}

// Per-circuit access
final names   = registry.circuitNames;          // Iterable<String>
final hasSvc  = registry.contains('payments'); // bool
final state   = registry['payments']?.state;   // CircuitState? for one circuit

// Expose in a health endpoint
app.get('/health', (req, res) {
  final circuits = registry.snapshot.map(
    (name, state) => MapEntry(name, state == CircuitState.closed),
  );
  final healthy = registry.isHealthy;
  res.statusCode = healthy ? 200 : 503;
  res.json({'status': healthy ? 'healthy' : 'degraded', 'circuits': circuits});
});

Header Redaction (Security Logging) #

LoggingHandler automatically redacts sensitive headers in log output:

final client = HttpClientBuilder('secure-api')
    .addHandler(LoggingHandler(
      logHeaders: true,
      // Default redacted set: authorization, proxy-authorization,
      // cookie, set-cookie, x-api-key
      redactedHeaders: {
        'authorization',
        'x-api-key',
        'x-custom-secret',  // Add your own
      },
    ))
    .withRetry(RetryPolicy.exponential(maxRetries: 3))
    .build();
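Redaction itself is a simple transformation over the header map before it is written to the log. The redact helper and the placeholder text below are illustrative, not LoggingHandler's exact internals or output format:

```dart
// Replace values of sensitive headers (matched case-insensitively)
// before logging. Illustrative sketch only.
Map<String, String> redact(
    Map<String, String> headers, Set<String> redactedNames) {
  return headers.map((name, value) => MapEntry(
      name, redactedNames.contains(name.toLowerCase()) ? '<redacted>' : value));
}

void main() {
  final safe = redact(
    {'Authorization': 'Bearer abc123', 'Accept': 'application/json'},
    {'authorization', 'x-api-key'},
  );
  print(safe); // {Authorization: <redacted>, Accept: application/json}
}
```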

Rate Limiter Integration #

The companion package davianspace_http_ratelimit extends HttpClientBuilder with a .withRateLimit() method that supports six algorithms. Import both packages:

dependencies:
  davianspace_http_resilience: ^1.0.3
  davianspace_http_ratelimit:  ^1.0.3

import 'package:davianspace_http_resilience/davianspace_http_resilience.dart';
import 'package:davianspace_http_ratelimit/davianspace_http_ratelimit.dart';

// Token Bucket — burst up to 200, sustain 100 req/s
final client = HttpClientBuilder('my-api')
    .withBaseUri(Uri.parse('https://api.example.com'))
    .withLogging()
    .withRateLimit(RateLimitPolicy(
      limiter: TokenBucketRateLimiter(
        capacity: 200,
        refillAmount: 100,
        refillInterval: Duration(seconds: 1),
      ),
      acquireTimeout: Duration(milliseconds: 500),
      respectServerHeaders: true,         // parses X-RateLimit-* response headers
      onRejected: (ctx, e) => log.warning('Rate limited: $e'),
    ))
    .withRetry(RetryPolicy.exponential(maxRetries: 3)) // retried calls also throttled
    .withCircuitBreaker(CircuitBreakerPolicy(circuitName: 'api'))
    .build();

try {
  final response = await client.get(Uri.parse('/v1/items'));
  print(response.bodyAsString);
} on RateLimitExceededException catch (e) {
  print('Rate limited: ${e.retryAfter}');
} finally {
  client.dispose();
}

Six available algorithms:

| Class | Algorithm | Memory | Burst |
|---|---|---|---|
| TokenBucketRateLimiter | Token Bucket | O(1) | ✅ FIFO queue |
| FixedWindowRateLimiter | Fixed Window | O(1) | ⚠️ edge burst |
| SlidingWindowRateLimiter | Sliding Window Counter | O(1) | ✅ approximate |
| SlidingWindowLogRateLimiter | Sliding Window Log | O(n) | ✅ exact |
| LeakyBucketRateLimiter | Leaky Bucket | O(cap) | ✅ constant output |
| ConcurrencyLimiter | Semaphore | O(1) | ✅ FIFO queue |

Server-side per-key admission control is also available via ServerRateLimiter. See the companion package README for full documentation.


Lifecycle & Disposal #

All stateful resources implement dispose() for deterministic cleanup:

// Client disposal (disposes pipeline + underlying http.Client)
final client = HttpClientBuilder('api').build();
try {
  await client.get(uri);
} finally {
  client.dispose();
}

// Policy disposal (disposes circuit-breaker state, semaphores, etc.)
final policy = Policy.wrap([
  Policy.retry(maxRetries: 3),
  Policy.circuitBreaker(circuitName: 'svc'),
]);
try {
  await policy.execute(() => fetchData());
} finally {
  policy.dispose();
}

// Factory disposal
final factory = HttpClientFactory();
factory.addClient('svc', (b) => b.withRetry(RetryPolicy.constant(maxRetries: 2)));
// ... later
factory.clear(); // disposes all registered clients

Testing #

# Run the full test suite (926 tests)
dart test

# Run with coverage
dart test --coverage=coverage

# Static analysis (zero issues required)
dart analyze --fatal-infos

The test suite covers:

| Area | Files | What's Tested |
|---|---|---|
| Core | 3 | Request/response immutability, streaming, copy-with |
| Config | 1 | JSON parsing, all seven section parsers, edge cases |
| Factory | 3 | Fluent builder, named factory registry, verb helpers |
| Handlers | 2 | Hedging handler, structured logging + redaction |
| Observability | 2 | Event hub, error events, event types |
| Pipeline | 3 | Handler chaining, integration, pipeline builder |
| Policies | 3 | Circuit breaker sliding window, retry features, all policy configs |
| Resilience | 8 | Advanced retry, bulkhead isolation, concurrency stress, fallback, outcome classification, policy registry, namespaces |

API Reference #

Core Types #

| Type | Role |
|---|---|
| HttpRequest | Immutable outgoing request model with metadata bag |
| HttpResponse | Immutable response model with streaming support |
| HttpContext | Mutable per-request execution context (flows through pipeline) |
| CancellationToken | Cooperative cancellation with memoised onCancelled future |

Pipeline #

| Type | Role |
|---|---|
| HttpHandler | Abstract pipeline handler |
| DelegatingHandler | Middleware base with innerHandler chaining |
| TerminalHandler | Innermost handler — performs HTTP I/O |

Policies (Handler-Level Configuration) #

| Type | Role |
|---|---|
| RetryPolicy | Retry strategy: constant / linear / exponential back-off |
| CircuitBreakerPolicy | Threshold + break-duration circuit control |
| TimeoutPolicy | Per-attempt deadline |
| BulkheadPolicy | Max concurrency + queue depth |
| BulkheadIsolationPolicy | Semaphore-based isolation with rejection callbacks |
| HedgingPolicy | Speculative execution for tail-latency reduction |
| FallbackPolicy | Fallback action triggered by status code or exception |

Resilience Engine (Transport-Agnostic) #

| Type | Role |
|---|---|
| ResiliencePolicy | Abstract composable policy base with dispose() |
| RetryResiliencePolicy | Free-standing retry with back-off and onRetry callback |
| CircuitBreakerResiliencePolicy | Free-standing circuit breaker |
| TimeoutResiliencePolicy | Free-standing timeout |
| BulkheadResiliencePolicy | Free-standing concurrency limiter |
| BulkheadIsolationResiliencePolicy | Free-standing isolation with zero-polling semaphore |
| FallbackResiliencePolicy | Free-standing fallback |
| Policy | Static factory for all resilience policies |
| PolicyWrap | Composable multi-policy pipeline with introspection and dispose() |
| ResiliencePipelineBuilder | Fluent builder for PolicyWrap |
| PolicyRegistry | Named policy store with typed resolution |

Configuration #

| Type | Role |
|---|---|
| ResilienceConfig | Immutable top-level config (7 optional sections) |
| ResilienceConfigLoader | Parses JSON → ResilienceConfig |
| ResilienceConfigBinder | Binds config → policy instances |
| ResilienceConfigSource | Abstraction for static/dynamic config sources |
| JsonStringConfigSource | Static config source backed by a JSON string |
| InMemoryConfigSource | Dynamic config source with live-update support |

Observability #

| Type | Role |
|---|---|
| ResilienceEventHub | Centralized event bus (scheduleMicrotask dispatch) |
| ResilienceEvent | Sealed base class for all lifecycle events |
| CircuitBreakerRegistry | Circuit health checks, snapshot, and enumeration |

Client Factory #

| Type | Role |
|---|---|
| HttpClientFactory | Named + typed client factory with lifecycle management |
| HttpClientBuilder | Fluent pipeline builder for ResilientHttpClient |
| FluentHttpClientBuilder | Immutable fluent DSL |
| ResilientHttpClient | High-level HTTP client with verb helpers |

Migration Guide #

From 1.0.2 → 1.0.3 #

No breaking changes. Version 1.0.3 is fully backward-compatible.

New features and changes after upgrading:

  1. DI container integration: davianspace_dependencyinjection ^1.0.3 is now a runtime dependency. Two extension methods on ServiceCollection make HttpClientFactory and typed HTTP clients injectable:

    • addHttpClientFactory([configure]) — singleton HttpClientFactory with optional factory-level configuration.
    • addTypedHttpClient<TClient>(create, {clientName, configure}) — transient typed client resolved from the shared factory.

    Existing code that does not use DI is completely unaffected.

  2. Logging via davianspace_logging: package:logging has been replaced by davianspace_logging ^1.0.3. LoggingHandler now accepts a davianspace_logging.Logger instead of logging.Logger. If you were passing a logging.Logger to withLogging(logger: ...), you must switch to a davianspace_logging.Logger (created via LoggingBuilder or injected via DI). The default is NullLogger — a no-op logger.

  3. meta dependency removed: the @immutable and @internal annotations have been dropped. No source changes are required on your side.


From 1.0.1 → 1.0.2 #

No breaking changes. Version 1.0.2 is fully backward-compatible.

New features available after upgrading:

  1. Rate limiter companion package: davianspace_http_ratelimit v1.0.0 adds withRateLimit(RateLimitPolicy) to the HttpClientBuilder fluent API via an extension. Six algorithms are available: Token Bucket, Fixed Window, Sliding Window (counter), Sliding Window Log, Leaky Bucket, and Concurrency Limiter. Server-side per-key admission control (ServerRateLimiter) is also included. Add davianspace_http_ratelimit: ^1.0.0 to your pubspec.yaml to opt in.

From 1.0.0 → 1.0.1 #

No breaking changes. Version 1.0.1 is fully backward-compatible.

New features available after upgrading:

  1. ResiliencePolicy.dispose() — Call dispose() on policies when done. Existing code that does not call dispose() will continue to work but may leak resources in long-running processes.

  2. Hedging/Fallback config: ResilienceConfigLoader now recognises "Hedging" and "Fallback" JSON sections. Existing configs without these sections are unaffected.

  3. Per-request streaming — Set metadata: {HttpRequest.streamingKey: true} on any verb call. The default behaviour (handler-level streamingMode) is unchanged when the key is absent.

  4. Header redaction: LoggingHandler now accepts redactedHeaders and logHeaders. Existing LoggingHandler() calls use the default redaction set automatically.

  5. onRetry callback: RetryResiliencePolicy accepts an optional onRetry callback. Existing code without it is unaffected.


Contributing #

See CONTRIBUTING.md for development setup, coding standards, and pull-request guidelines.


Security #

For security concerns and responsible disclosure, see SECURITY.md.


License #

MIT — see LICENSE.
