# rpc_dart

Transport-agnostic RPC framework with contracts, streaming, and built-in transports.
## 2.6.2

- `RpcMessageParser`: replaced per-message buffer slicing with a read-offset approach: `advance()` moves a pointer in O(1), and a single `compact()` at the end of each parse pass drops consumed bytes in O(remaining). This eliminates the previous O(N²) copy behaviour when multiple gRPC frames arrive in a single chunk (relevant for the HTTP/2 and WebSocket transports).
- `RpcMessageParser`: fixed a latent bug where the loop condition `buffer.length >= 5` prevented re-entry when a 5-byte header arrived without its body, causing messages with bodies shorter than 5 bytes to stall in the buffer indefinitely.
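As a rough sketch of the read-offset technique described above (illustrative only, not rpc_dart's actual implementation; the class and method names below are assumptions), a gRPC frame parser can advance a pointer per frame and copy the unconsumed tail once per pass:

```dart
import 'dart:typed_data';

/// Illustrative gRPC frame parser using a read offset instead of slicing
/// the buffer after every message. A gRPC frame is a 1-byte compression
/// flag, a 4-byte big-endian length, then the payload.
class FrameParser {
  Uint8List _buffer = Uint8List(0);
  int _offset = 0; // read pointer

  List<Uint8List> parse(Uint8List chunk) {
    _buffer = _concat(_buffer, chunk);
    final frames = <Uint8List>[];
    // Compare against *remaining* bytes (length - offset), so the loop
    // re-enters once the body of a previously seen header arrives.
    while (_buffer.length - _offset >= 5) {
      final header = ByteData.sublistView(_buffer, _offset, _offset + 5);
      final bodyLen = header.getUint32(1); // big-endian, after the flag byte
      if (_buffer.length - _offset - 5 < bodyLen) break; // body not here yet
      frames.add(Uint8List.fromList(
          _buffer.sublist(_offset + 5, _offset + 5 + bodyLen)));
      _offset += 5 + bodyLen; // advance(): O(1), no copying
    }
    _compact(); // one O(remaining) copy per parse pass
    return frames;
  }

  void _compact() {
    if (_offset == 0) return;
    _buffer = Uint8List.fromList(_buffer.sublist(_offset));
    _offset = 0;
  }

  static Uint8List _concat(Uint8List a, Uint8List b) =>
      Uint8List(a.length + b.length)
        ..setRange(0, a.length, a)
        ..setRange(a.length, a.length + b.length, b);
}
```

The key point is that per-frame work touches no bytes beyond the header read, so N frames in one chunk cost O(total bytes) instead of O(N²).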
## 2.6.1

- `IRpcServer`: removed `host` and `port` from the interface; these are transport-level concerns, not RPC server concerns.
- Removed the `IRpcServerFactory` interface along with `RpcHttp2ServerFactory` and `RpcWebSocketServerFactory`; the abstraction had no consumers.
## 2.6.0

- Added `RpcHeaders` class with gRPC semantic header name constants; all internal metadata and protocol code now uses `RpcHeaders.*` instead of raw strings.
- Core parser passes compressed frames through when no decompressor is configured so that the application layer (transport) can handle decompression; the core stays agnostic.
- Transport packages now own their own wire format: pseudo-headers (`:method`, `:path`, `:status`, etc.) are generated by each transport and never stored in `RpcMetadata`; `http2HeadersToRpcMetadata` filters them out automatically.
- `RpcHttpCorsPolicy`: always exposes `grpc-encoding`, `grpc-status`, `grpc-message` and `grpc-accept-encoding` in `Access-Control-Expose-Headers` so browsers do not block them in cross-origin responses.
- `RpcHttpCorsPolicy`: always allows `grpc-timeout`, `grpc-encoding` and `grpc-accept-encoding` in `Access-Control-Allow-Headers`.
- `RpcHttpCorsPolicy`: renamed the `exposedHeaders` parameter to `extraExposedHeaders` to clarify that it is additive on top of the required gRPC headers.
- `RpcCallerEndpoint`: injects `grpc-accept-encoding` into the context based on `compressionEnabled`, so globally registered codecs do not force server-side compression when the endpoint has compression disabled.
- WebSocket transport: metadata is now encoded as binary CBOR (`WsMetadataCodec`) instead of JSON: ~40 % more compact, self-describing, no extra dependencies. Wire format: `{"h":[["name","val"],...],"p":"/Svc/Method"}`.
- WebSocket transport: `_ensureGrpcFrame` normalises parser output to gRPC frames, consistent with how the HTTP/2 transport delivers to `incomingMessages`.
- Exported `CborCodec` from `rpc_dart` for use by transport packages.
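The pseudo-header filtering mentioned above comes down to a name check, since HTTP/2 pseudo-headers all start with `:`. A minimal sketch (the real `http2HeadersToRpcMetadata` does more than this; the function name below is illustrative):

```dart
/// Sketch: drop HTTP/2 pseudo-headers (names starting with ':') when
/// converting wire headers into transport-agnostic metadata.
Map<String, String> stripPseudoHeaders(Map<String, String> wireHeaders) => {
      for (final e in wireHeaders.entries)
        if (!e.key.startsWith(':')) e.key: e.value,
    };
```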
## 2.5.1

- Compression is now pluggable via the `RpcCompressionCodec` interface and `RpcGrpcCompression.register()`.
- The native `dart:io` gzip codec continues to auto-register on startup as before.
- External packages (e.g. `rpc_dart_compression`) can now register codecs on web/JS/Wasm.
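As a rough illustration of the pluggable-codec idea, a custom codec pairs an encoding name with compress/decompress functions. Note this is a hypothetical sketch: the real `RpcCompressionCodec` interface and `RpcGrpcCompression.register()` signature may declare different members, and the names below are assumptions.

```dart
import 'dart:io';

/// Hypothetical codec shape; the actual RpcCompressionCodec interface in
/// rpc_dart may differ.
abstract class CompressionCodec {
  String get encodingName; // the value advertised in grpc-encoding
  List<int> compress(List<int> data);
  List<int> decompress(List<int> data);
}

/// A gzip codec backed by dart:io, mirroring the default that
/// auto-registers on startup. dart:io is unavailable on web/JS/Wasm,
/// which is why external packages can register their own codecs there.
class GzipCompressionCodec implements CompressionCodec {
  @override
  String get encodingName => 'gzip';

  @override
  List<int> compress(List<int> data) => gzip.encode(data);

  @override
  List<int> decompress(List<int> data) => gzip.decode(data);
}
```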
## 2.5.0

- Added gzip compression support:
  - `RpcCallerEndpoint` now has a `compressionEnabled` flag (default `true`) that automatically injects `grpc-encoding: gzip` for non-zero-copy transports (e.g. network transports).
  - `StreamProcessor` auto-detects request/response encoding from incoming metadata (`grpc-encoding` / `grpc-accept-encoding`), enabling transparent decompression on the server side.
  - `UnaryResponder` captures `grpc-accept-encoding` per stream to compress unary responses when the client advertises support.
  - `RpcMetadata.forServerInitialResponse` now accepts an optional `encoding` parameter to include `grpc-encoding` in the initial server response headers for streaming compression.
- Added compression tests for all stream types (unary, server streaming, client streaming, bidirectional).
- Fix: `UnaryResponder` now captures `grpc-encoding` from incoming client metadata per stream and uses it for decompression, matching the behaviour of `StreamProcessor`.
- Fix: caller and streaming processors now validate `grpc-encoding` against supported encodings before compressing, and throw `RpcException` with a clear message instead of `UnsupportedError`.
- Fix: `UnaryResponder` now throws `RpcException` (was a bare `Exception`) when the request frame is empty.
- Refactor: per-stream state in `UnaryResponder` consolidated from five parallel maps into a single `_UnaryStreamState` object; cleanup is now atomic.
- Refactor: response-encoding negotiation logic extracted to `RpcGrpcCompression.selectResponseEncoding()` and reused by both `UnaryResponder` and `StreamProcessor`.
- Refactor: `RpcCallerEndpoint._effectiveContext` renamed to `_prepareContext` for clarity.
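The negotiation that response-encoding selection performs can be pictured as picking the first client-advertised encoding the server also supports, falling back to no compression. A minimal sketch (illustrative only; rpc_dart's actual `selectResponseEncoding()` logic may differ):

```dart
/// Sketch of grpc-accept-encoding negotiation: choose the first encoding
/// the client advertises that the server supports, else 'identity'
/// (i.e. send the response uncompressed).
String selectResponseEncoding(String? acceptEncoding, Set<String> supported) {
  if (acceptEncoding == null || acceptEncoding.isEmpty) return 'identity';
  for (final candidate in acceptEncoding.split(',')) {
    final name = candidate.trim();
    if (supported.contains(name)) return name;
  }
  return 'identity';
}
```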
## 2.4.1

- Added an interface for transport servers
## 2.4.0

- Breaking: removed the transport toolkit (`transport_toolkit.dart`) from the public API surface.
- Added `RpcSecurityPolicy` to centralize transport/parser limits and metadata validation.
- Security: `grpc-message` is now percent-encoded per the gRPC HTTP/2 spec; added a `decodeGrpcMessage` helper.
- Protocol: client request metadata now includes `grpc-accept-encoding` (advertises supported message encodings).
- Fix: clean up the `RpcCallerEndpoint` cancellation token registry after completed calls (prevents memory growth in long-lived clients).
- Security: `RpcInMemoryTransport` now enforces policy limits (metadata validation, active stream cap); `pair()` accepts an optional `policy`.
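Per the gRPC HTTP/2 spec, `grpc-message` percent-encoding means UTF-8 bytes outside printable ASCII, plus `%` itself, become `%XX`. A minimal sketch of the encode/decode pair (function names are illustrative, not rpc_dart's exact helpers):

```dart
import 'dart:convert';

/// Percent-encode a grpc-message value: bytes in 0x20-0x7E other than '%'
/// pass through; everything else becomes %XX (uppercase hex).
String encodeGrpcMessage(String message) {
  final out = StringBuffer();
  for (final byte in utf8.encode(message)) {
    if (byte >= 0x20 && byte <= 0x7E && byte != 0x25) {
      out.writeCharCode(byte);
    } else {
      out.write('%${byte.toRadixString(16).toUpperCase().padLeft(2, '0')}');
    }
  }
  return out.toString();
}

/// Inverse of the above; malformed escapes are left as-is, matching the
/// spec's advice to decode permissively.
String decodeGrpcMessage(String encoded) {
  final bytes = <int>[];
  var i = 0;
  while (i < encoded.length) {
    if (encoded[i] == '%' && i + 2 < encoded.length) {
      final value = int.tryParse(encoded.substring(i + 1, i + 3), radix: 16);
      if (value != null) {
        bytes.add(value);
        i += 3;
        continue;
      }
    }
    bytes.add(encoded.codeUnitAt(i));
    i++;
  }
  return utf8.decode(bytes, allowMalformed: true);
}
```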
## 2.3.4

- Fixed `lints` package compatibility
## 2.3.3

- Fixed `test` package compatibility
## 2.3.2

- Added annotations for code generation
## 2.3.1

- Optimized ping requests
- Introduced a unified middleware and interceptor pipeline across caller and responder endpoints, ensuring context propagation for unary and streaming RPC flows.
- Added `RpcMiddlewareContext` enhancements and default async hooks so extensions can enrich request/response handling without touching the core.
- Hardened serialization by validating codec decoders and shipping a pluggable `RpcBinaryCodec` for external binary formats such as protobuf.
- Expanded the test suite with exhaustive pipeline coverage for all RPC shapes and binary codec scenarios.
## 2.3.0

- Added aggregated diagnostics for caller and responder endpoints with `health()` / `reconnect()` snapshots that combine endpoint metrics and transport status via `RpcEndpointHealth` and `RpcHealthStatus`.
- Implemented a built-in ping protocol between endpoints that exposes round-trip timing, responder metadata and debug labels through `RpcCallerEndpoint.ping()`.
- Extended transport infrastructure with `IRpcTransport.health()` and `IRpcTransport.reconnect()`, plus detailed implementations for `RpcInMemoryTransport`, `RpcTransportRouter` and the transport toolkit, including automatic partner shutdown to avoid close deadlocks.
- Improved `RpcStreamIdManager` so stream identifiers are recycled after hitting the HTTP/2 limit, preventing allocation failures during long-lived workloads.
- Updated documentation and examples with health-monitoring guidance and diagnostics-driven workflows.
## 2.2.2

- Docs: simplified everything

## 2.2.1

- Fix: removed the `base` class modifier to allow mock contracts

## 2.2.0

- Added support for cancellation in caller/responder
- Fixed timeout passing through headers

## 2.1.1

- Updated logo

## 2.1.0

- Added a `dispose` base method to the responder to clean up resources

## 2.0.0

- Updated license to MIT
- Added logo to the package
- Added readme translations (RU, EN)
## 1.8.0

- Added the ability to specify the data transfer mode in a contract (zero-copy, codec, auto)

## 1.7.0

- Added the ability to specify whether a transport supports Zero-Copy

## 1.6.0

- Added Zero-Copy optimization for `RpcInMemoryTransport`: object transfer without serialization/deserialization

## 1.5.0

- Added Transport Router for smart routing of RPC calls between transports
- Added the `RpcRoutingCondition` typedef for typing routing-condition functions
- Added support for routing rules with priorities (`routeCall`, `routeWhen`)
- Added conditional routing with access to `RpcContext`
- Added automatic validation of transport roles (client/server)
- Implemented correct Stream ID routing between transports
- Added router statistics and detailed logging
- Updated documentation with Transport Router usage examples
- Renamed `RpcLoggerSettings` -> `RpcLogger`
- Renamed `RpcContextPropagation` -> `RpcContext`
## 1.4.0

- Added the RpcContext API with full gRPC-style context support
- Added support for headers, metadata, deadline and timeout
- Added distributed tracing support with trace IDs
- Added an `internal` logging level for library-internal details
- Eliminated log duplication; the library is "silent" by default
- Optimized InMemoryTransport for improved performance
- Fixed race conditions and deadlock situations
- Increased reliability of tests and the CI/CD pipeline
## 1.3.2

- Removed auto-start of Responders
- Removed `bundleId` from `StreamDistributor`
- Updated documentation

## 1.3.1

- Optimized the CBOR serializer and deserializer
- Added benchmarks for performance testing
## 1.3.0

- Updated documentation

## 1.2.2

- Added a 1 ms delay for data-transfer stability

## 1.2.1

- Fixed specific errors in RPC method operations (timeouts)
## 1.2.0

- Fixed a critical bug with stream processing in the Stream Processor
- Added explicit support for the `bindToMessageStream()` method for manual stream binding
- Improved error handling in streams through gRPC statuses in metadata
- Fixed deadlock situations in client, server and bidirectional streams
- Optimized timeouts in tests for faster execution
- Improved documentation on working with streams and error handling
- Fixed an issue with double stream listening in ClientStreamResponder
## 1.1.0

- Added `RpcStreamIdManager` for stream ID management

## 1.0.3

- The CBOR serializer now works only with `Map<String, dynamic>`

## 1.0.2

- Added `StreamDistributor`
- Fixed linter issues

## 1.0.1

- Added subcontract registration
- Fixed unary method operations
## 1.0.0

- First stable release
- Implemented contract-based Backend-for-Domain (BFD) architecture
- Added support for all RPC types: unary calls, server streaming, client streaming, bidirectional streaming
- Added efficient CBOR serialization
- Added primitive types (String, Int, Double, Bool, Null) with operator support
- Implemented an extensible logging system with color and level support
- Added universal transports: InMemoryTransport and IsolateTransport
- Implemented timeout handling and informative errors
- The main package contains only platform-independent transports; platform-specific ones will be available in separate packages
## 0.2.0

- Improved stream handling (BidiStream, ClientStreamingBidiStream, ServerStreamingBidiStream)
- Added support for diagnostic metrics and monitoring
- Improved marker handling in streams for more reliable interaction
- Added typed markers for various operations (stream completion, timeouts, etc.)
- Improved error handling and status transfer between client and server
- Optimized metadata handling in requests and responses
- Improved deadline and timeout handling in RPC operations
- Added an operation-cancellation mechanism

## 0.1.1

- Fixed an error when registering contracts
- Added a MsgPack serializer

## 0.1.0

- Initial release