# Daredis
Daredis is a Redis client for Dart with:
- single-node and cluster clients
- connection pooling
- dedicated Pub/Sub sessions
- dedicated transaction sessions
- typed command helpers on top of raw Redis replies
It is designed around a simple rule:
- normal commands use pooled connections
- Pub/Sub uses a dedicated connection
- transactions use a dedicated connection
- cluster commands route to the correct node and use per-node pools
## Features

- Single-node Redis client: `Daredis`
- Redis Cluster client: `DaredisCluster`
- Connection pooling with timeouts, idle eviction, retry, and stats
- Dedicated Pub/Sub session API with reconnect support
- Dedicated transaction session API for single-node Redis
- Pipeline support
- Typed helper APIs for `COMMAND`, `FUNCTION`, `XINFO`, `ROLE`, and more
- Raw `sendCommand()` escape hatch when you need low-level access
- TLS/SSL, AUTH, and ACL-friendly connection options
- RESP decoding with Redis error mapping to custom exceptions
## Installation

Add `daredis` to your `pubspec.yaml`:

```yaml
dependencies:
  daredis: ^0.1.0
```

Then run:

```shell
dart pub get
```
## Examples

Ready-to-run examples live in:

- `example/single_node.dart`
- `example/cluster.dart`
- `example/sessions.dart`

Run one with:

```shell
dart run example/single_node.dart
```
## Quick Start

### Single Node
```dart
import 'package:daredis/daredis.dart';

Future<void> main() async {
  final client = Daredis(
    options: const ConnectionOptions(
      host: '127.0.0.1',
      port: 6379,
    ),
  );
  await client.connect();
  try {
    const key = 'example:greeting';
    await client.set(key, 'hello from daredis');
    final value = await client.get(key);
    print('Stored value: $value');

    await client.hSet('example:user:1', 'name', 'alice');
    await client.hSet('example:user:1', 'city', 'shanghai');
    final user = await client.hGetAll('example:user:1');
    print('User hash: $user');
  } finally {
    await client.close();
  }
}
```
### Cluster
Use hash tags when multiple keys must stay in the same slot.
```dart
import 'package:daredis/daredis.dart';

Future<void> main() async {
  final cluster = DaredisCluster(
    options: ClusterOptions(
      seeds: const [
        ClusterNode('127.0.0.1', 7000),
        ClusterNode('127.0.0.1', 7001),
      ],
      nodePoolSize: 8,
    ),
  );
  await cluster.connect();
  try {
    await cluster.set('cart:{42}:total', '199');
    print(await cluster.get('cart:{42}:total'));
  } finally {
    await cluster.close();
  }
}
```
Cluster transactions should also be scoped to one slot. Open them with a routing key that already carries the hash tag you want to pin:
```dart
final tx = await cluster.openTransaction('cart:{42}:total');
try {
  await tx.watch(['cart:{42}:total', 'cart:{42}:items']);
  await tx.multi();
  await tx.sendCommand(['SET', 'cart:{42}:total', '199']);
  await tx.sendCommand(['SET', 'cart:{42}:items', '3']);
  print(await tx.exec());
} finally {
  await tx.close();
}
```
The routing key only selects the slot and node. Subsequent keyed commands do not need to include that exact key, but they must continue to target the same slot. The client does not emulate cross-slot transaction behavior.
## Client Model

The library uses different connection strategies for different workloads.

```
Daredis
  -> Pool<Connection>

DaredisCluster
  -> slot-aware router
  -> per-node Pool<Connection>

openPubSub()
  -> dedicated Connection

openTransaction()
  -> dedicated Connection
```
Why this matters:

- ordinary commands can safely share pooled connections
- Pub/Sub cannot share a normal command connection once subscribed
- `WATCH`/`MULTI`/`EXEC` must stay on the same connection
- cluster routing pins keyed work to the correct node pool
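The rules above can be seen in one place by exercising all three acquisition paths in a single program. This is a sketch built from the APIs shown elsewhere in this README:

```dart
import 'package:daredis/daredis.dart';

Future<void> main() async {
  final client = Daredis(
    options: const ConnectionOptions(host: '127.0.0.1', port: 6379),
  );
  await client.connect();
  try {
    // Ordinary command: borrows a pooled connection and returns it.
    await client.set('k', 'v');

    // Pub/Sub: a dedicated connection, closed explicitly.
    final pubsub = await client.openPubSub();
    await pubsub.subscribe(['events']);
    await pubsub.close();

    // Transaction: another dedicated connection, also closed explicitly.
    final tx = await client.openTransaction();
    await tx.multi();
    await tx.set('k', 'v2');
    await tx.exec();
    await tx.close();
  } finally {
    await client.close();
  }
}
```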
## Command Surface Design

The package models command availability through concrete client/session types.

- `Daredis` exposes the normal command groups for pooled single-node access
- `DaredisCluster` exposes the normal command groups plus cluster-only helpers
- `RedisTransaction` exposes transaction commands like `WATCH`, `MULTI`, and `EXEC`

This keeps command availability aligned with the underlying connection model instead of exposing every command on every executor shape.
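Concretely, transaction verbs are simply absent from the pooled client's type, so misuse fails at compile time rather than at runtime. A sketch based on the types listed above:

```dart
// Pooled client: regular command groups only.
await client.set('key', 'value');
// client.multi(); // does not compile: MULTI lives on RedisTransaction

// Transaction session: WATCH/MULTI/EXEC are available here.
final tx = await client.openTransaction();
await tx.multi();
await tx.set('key', 'value2');
await tx.exec();
await tx.close();
```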
## Cluster Multi-Key Rules

`DaredisCluster` preserves native Redis Cluster command semantics.
- Multi-key commands work when all keys hash to the same slot
- Cross-slot multi-key commands fail early on the client
- The client does not scatter, merge, or emulate cross-slot command behavior
- The same rule applies inside cluster transactions opened with a routing key
Use hash tags such as `{42}` when related keys must be used together:

```dart
await cluster.mSet({
  'cart:{42}:total': '199',
  'cart:{42}:items': '3',
});

print(await cluster.mGet([
  'cart:{42}:total',
  'cart:{42}:items',
]));
```
## Connection Options

```dart
final options = ConnectionOptions(
  host: '127.0.0.1',
  port: 6379,
  username: 'default',
  password: 'secret',
  useSsl: false,
  connectTimeout: const Duration(seconds: 5),
  commandTimeout: const Duration(seconds: 30),
  reconnectPolicy: const ReconnectPolicy(
    maxAttempts: 5,
    delay: Duration(seconds: 2),
  ),
);
```
## Single-Node Usage

### Basic Commands

```dart
await client.set('key', 'value');
print(await client.get('key'));

await client.hSet('user:1', 'name', 'alice');
print(await client.hGetAll('user:1'));

await client.rPush('jobs', ['a', 'b']);
print(await client.lRange('jobs', 0, -1));

await client.sAdd('tags', ['dart', 'redis']);
print(await client.sMembers('tags'));
```
### Pipeline

Use a pipeline when you want to batch commands on one connection and collect their replies in order.

On Redis Cluster, all keyed commands in one pipeline must route to the same node. Use hash tags when related keys need to stay together.

```dart
final pipeline = client.pipeline();
pipeline.add(['SET', 'key1', 'v1']);
pipeline.add(['GET', 'key1']);
pipeline.add(['INCR', 'counter']);
final results = await pipeline.execute();
print(results);
```
### Transactions

Transactions are exposed as a dedicated session because `WATCH`, `MULTI`, and `EXEC` must run on the same connection. These transactional commands are intentionally exposed on `RedisTransaction`, not on the pooled `Daredis` client itself.

```dart
final tx = await client.openTransaction();
try {
  await tx.watch(['account:1']);
  await tx.multi();
  await tx.set('account:1', 'updated');
  final replies = await tx.exec();
  print(replies);
} finally {
  await tx.close();
}
```
`DaredisCluster` supports transactions only as explicit single-slot sessions. Open them with a routing key so the session can pin itself to one slot and one node.

Transaction sessions are single-use. After `close()`, open a fresh session with `openTransaction()` instead of reconnecting the old one.
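Because sessions are single-use, an optimistic-locking retry loop opens a fresh session per attempt. This is a sketch; it assumes `exec()` follows the usual Redis convention of returning a null reply when a watched key changed, so check what daredis actually returns in that case:

```dart
// Retry an optimistic WATCH/MULTI/EXEC update, opening a new
// single-use session for every attempt.
for (var attempt = 0; attempt < 3; attempt++) {
  final tx = await client.openTransaction();
  try {
    await tx.watch(['account:1']);
    await tx.multi();
    await tx.set('account:1', 'updated');
    final replies = await tx.exec();
    if (replies != null) break; // assumed: null signals an aborted EXEC
  } finally {
    await tx.close(); // never reconnect a closed session
  }
}
```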
### Pub/Sub

Pub/Sub also uses a dedicated connection.

```dart
final pubsub = await client.openPubSub();
await pubsub.subscribe(['news']);

final sub = pubsub.dataMessages.listen((message) {
  print('channel=${message.channel} payload=${message.payload}');
});

await client.sendCommand(['PUBLISH', 'news', 'hello world']);

await sub.cancel();
await pubsub.close();
```
`RedisPubSub.close()` is terminal for that session. After closing, the message stream finishes and the same session cannot be reopened.

You can also consume messages in a pull style:

```dart
final message = await pubsub.getMessage(
  timeout: const Duration(seconds: 1),
  ignoreSubscriptionMessages: true,
);
```
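The pull style can be wrapped in a simple drain loop. The null-on-timeout exit condition below is an assumption; adjust it to whatever `getMessage()` actually signals when the timeout elapses:

```dart
// Drain pending messages in a pull loop (sketch; assumes getMessage()
// returns null when the timeout passes with nothing to deliver).
while (true) {
  final message = await pubsub.getMessage(
    timeout: const Duration(seconds: 1),
    ignoreSubscriptionMessages: true,
  );
  if (message == null) break;
  print('channel=${message.channel} payload=${message.payload}');
}
```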
## Cluster Usage

### Same-Slot Multi-Key Operations

Redis Cluster requires multi-key commands to stay in one slot. Use hash tags:

```dart
await cluster.mSet({
  'profile:{7}:name': 'alice',
  'profile:{7}:city': 'shanghai',
});

final values = await cluster.mGet([
  'profile:{7}:name',
  'profile:{7}:city',
]);
print(values); // [alice, shanghai]
```
### Cluster Metadata

```dart
final info = await cluster.clusterInfo();
print(info['cluster_state']);

final slot = await cluster.clusterKeyslot('profile:{7}:name');
print(slot);

final ranges = await cluster.clusterSlotRanges();
print(ranges.first.primary);
```
## Pool Configuration

`Daredis` uses a pool of `Connection` objects for normal commands.

```dart
final client = Daredis(
  options: const ConnectionOptions(host: '127.0.0.1', port: 6379),
  poolSize: 10,
  testOnBorrow: true,
  testOnReturn: false,
  maxWaiters: 500,
  acquireTimeout: const Duration(seconds: 5),
  idleTimeout: const Duration(seconds: 30),
  evictionInterval: const Duration(seconds: 10),
  createMaxAttempts: 3,
  createRetryDelay: const Duration(milliseconds: 100),
  useLifo: true,
);
```
### Pool Stats

```dart
final stats = client.poolStats;
print(stats.total);
print(stats.idle);
print(stats.inUse);
print(stats.creating);
print(stats.waiters);
print(stats.createdCount);
print(stats.disposedCount);
print(stats.createFailureCount);
print(stats.lastEvictionAt);
print(stats.lastCreateFailureAt);
```
Recommended production-style defaults:

- `idleTimeout`: 30s
- `evictionInterval`: 10s
- `createMaxAttempts`: 3
- `createRetryDelay`: 100ms
- `useLifo: true` if you want to prefer hot connection reuse
- set `maxWaiters` explicitly for latency-sensitive services
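Applied to the constructor parameters shown in this section, those defaults look like the following (the `host`/`port` and `poolSize`/`maxWaiters` values are placeholders to tune for your deployment):

```dart
final client = Daredis(
  options: const ConnectionOptions(host: '127.0.0.1', port: 6379),
  poolSize: 10,
  maxWaiters: 500, // size this explicitly for latency-sensitive services
  idleTimeout: const Duration(seconds: 30),
  evictionInterval: const Duration(seconds: 10),
  createMaxAttempts: 3,
  createRetryDelay: const Duration(milliseconds: 100),
  useLifo: true, // prefer hot connection reuse
);
```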
## Cluster Pool Configuration

Cluster keeps one connection pool per discovered Redis node. Routing stays inside the cluster client, so `nodePoolSize` directly controls the per-node concurrency budget.
```dart
final cluster = DaredisCluster(
  options: ClusterOptions(
    seeds: const [
      ClusterNode('127.0.0.1', 7000),
      ClusterNode('127.0.0.1', 7001),
    ],
    nodePoolSize: 8,
    poolMaxWaiters: 500,
    poolAcquireTimeout: const Duration(seconds: 5),
    poolIdleTimeout: const Duration(seconds: 30),
    poolEvictionInterval: const Duration(seconds: 10),
    poolCreateMaxAttempts: 3,
    poolCreateRetryDelay: const Duration(milliseconds: 100),
    poolUseLifo: true,
  ),
);
```
## Typed Helper APIs

`sendCommand()` is still available as a low-level escape hatch, but for most application code the typed helpers are easier to read and maintain.
### Command Metadata

```dart
final docs = await client.commandDocEntriesFor(['SET']);
print(docs.first.name);
print(docs.first.summary);
print(docs.first.arguments.first.name);

final info = await client.commandInfoEntriesFor(['GET']);
print(info.first.name);
print(info.first.flags);
print(info.first.categories);
print(info.first.firstKey);
```
### Stream Metadata

```dart
final streamInfo = await client.xInfoStreamEntry('orders');
print(streamInfo.length);

final groups = await client.xInfoGroupEntries('orders');
for (final group in groups) {
  print('${group.name} pending=${group.pending}');
}

final consumers = await client.xInfoConsumerEntries('orders', 'group-a');
for (final consumer in consumers) {
  print('${consumer.name} idle=${consumer.idle}');
}
```
### Functions

```dart
final libraries = await client.functionLibraryEntries();
for (final library in libraries) {
  print(library.libraryName);
  for (final function in library.functions) {
    print(function.name);
  }
}

final stats = await client.functionStatsEntry();
print(stats.runningScript?.functionName);
print(stats.engines['LUA']?.librariesCount);
print(stats.engines['LUA']?.functionsCount);
```
### Role

```dart
final role = await client.roleInfo();
print(role.role);
if (role.role == 'master') {
  print(role.replicas.length);
}
```
### Scripting Helpers

Raw script commands are available:

- `eval(...)`
- `evalRo(...)`
- `evalSha(...)`
- `evalShaRo(...)`

There are also typed convenience helpers for common result shapes:

- `evalString(...)`, `evalInt(...)`, `evalListString(...)`
- `evalRoString(...)`, `evalRoInt(...)`, `evalRoListString(...)`
- `evalShaString(...)`, `evalShaInt(...)`, `evalShaListString(...)`
- `evalShaRoString(...)`, `evalShaRoInt(...)`, `evalShaRoListString(...)`
Example:

```dart
final sha = await client.scriptLoad("return redis.call('GET', KEYS[1])");
final value = await client.evalShaString(
  sha,
  1,
  ['user:1:name'],
  const [],
);
print(value);
```
## TLS / SSL

```dart
final client = Daredis(
  options: const ConnectionOptions(
    host: 'your-redis-server',
    port: 6380,
    useSsl: true,
  ),
);
```
## Exceptions

The library maps Redis and client failures to custom exception types:

- `DaredisConnectionException`
- `DaredisTimeoutException`
- `DaredisNetworkException`
- `DaredisCommandException`
- `DaredisClusterException`
- `DaredisStateException`
- `DaredisArgumentException`
- `DaredisUnsupportedException`
- `DaredisProtocolException`
Example:

```dart
try {
  await client.set('key', 'value');
} on DaredisCommandException catch (e) {
  print(e);
}
```
## Low-Level Escape Hatch

If you need a Redis command that does not yet have a high-level helper, use `sendCommand()`:

```dart
final reply = await client.sendCommand(['PING']);
print(reply);
```

This escape hatch is kept available on purpose, but prefer the typed helpers where they exist for readability and long-term maintainability.
## Supported High-Level Areas
The library already includes helpers for:
- strings
- keys
- lists
- hashes
- sets
- sorted sets
- streams
- server and command metadata
- scripting
- geo
- hyperloglog
- cluster metadata and routing helpers
## Testing
The project includes:
- unit tests for pool, cluster routing, redirect handling, and Pub/Sub helpers
- integration tests for single-node Redis
- integration tests for Redis Cluster
Typical commands:

```shell
dart analyze
dart test
```
## License

This project is licensed under the MIT License. See `LICENSE`.
## Current Design Notes

- `Daredis` is the top-level single-node client
- `DaredisCluster` is the top-level cluster client
- normal commands use pooled connections
- Pub/Sub and transactions use dedicated connections
- cluster transactions are limited to explicit single-slot sessions

This keeps the API honest and avoids hiding Redis connection semantics behind an unsafe abstraction.