

The stash caching library was designed from the ground up with extensibility in mind. It's based on a small core that relies on several extension points. Feature-wise, it supports the most traditional capabilities found in well-known caching libraries, like expiration and eviction. The API itself was heavily influenced by the JCache spec from the Java world, but draws inspiration from other libraries as well.

3rd party library support was a major concern since its inception. As such, a library, stash_test, is provided with a complete set of tests that allows implementers of novel storage and cache frontends to test their implementations against the same baseline tests used by the main library.


  • :alarm_clock: Expiry policies - out-of-box support for Eternal (the default), Created, Accessed, Modified and Touched policies.
  • :outbox_tray: Eviction policies - out-of-box support for FIFO (first-in, first-out), FILO (first-in, last-out), LRU (least-recently used), MRU (most-recently used), LFU (least-frequently used, the default) and MFU (most-frequently used).
  • :game_die: Cache entry sampling - Sampling of eviction candidates either over the whole set (the default) using the Full sampler or, alternatively, through a Random sampling strategy.
  • :rocket: Built-in binary serialization - Provides out-of-box, highly performant binary serialization using msgpack, with an implementation inspired by the msgpack_dart package and adapted to the specific needs of this library.
  • :loop: Extensible - Pluggable implementations of custom encoding/decoding, storage, expiry, eviction and sampling strategies.
  • :wrench: Testable - Storage and cache harnesses for 3rd party support of novel storage and cache frontend strategies.
  • :hamburger: Tiered cache - Allows the configuration of a highly performant primary cache (in-memory, for example) and a secondary, second-level cache.
  • :sparkles: Events - Provides a set of subscribable cache events: CreatedEntryEvent, UpdatedEntryEvent, RemovedEntryEvent, ExpiredEntryEvent and EvictedEntryEvent.

Storage Implementations

There's a vast array of storage implementations available which you can use.

| Package | Description |
| --- | --- |
| stash_memory | A memory storage implementation |
| stash_file | A file storage implementation using the file package |
| stash_sqlite | A sqlite storage implementation using the moor package |
| stash_hive | A hive storage implementation using the hive package |
| stash_sembast | A sembast storage implementation using the sembast package |
| stash_sembast_web | A sembast web storage implementation using the sembast_web package |
| stash_objectbox | An objectbox storage implementation using the objectbox package |

Library Integrations

There are also some integrations with well-known Dart libraries:

| Package | Description |
| --- | --- |
| stash_dio | Integrates with the Dio HTTP client |

Test Support

Finally, a testing library is provided to aid in the development of third party extensions:

| Package | Description |
| --- | --- |
| stash_test | Testing support for stash extensions |

Getting Started

Select one of the storage implementation libraries and add the package to your pubspec.yaml, replacing x.x.x with the latest version of the storage implementation. The example below uses the stash_memory package, which provides an in-memory implementation:

    dependencies:
      stash_memory: ^x.x.x

Run the following command to install dependencies:

dart pub get

Finally, to start developing, import the corresponding implementation. The example below imports the in-memory storage provider from the stash_memory library:

import 'package:stash_memory/stash_memory.dart';
// In a more general sense 'package:stash_xxx/stash_xxx.dart' where xxx is the name of the
// storage provider: memory, hive and so on


Simple usage

Create a Cache using the appropriate storage mechanism. For example, for the in-memory implementation you use the newMemoryCache function exported by the stash_memory package. Note that if the name of the cache is not provided (as in the example below) a UUID is automatically assigned as the name:

  // Creates a memory cache with unlimited capacity
  final cache = newMemoryCache();
  // In a more general sense 'newXXXCache' where xxx is the name of the storage provider, 
  // memory, file, sqlite, hive and so on

Or, alternatively, specify a max capacity, 10 for example. Note that the eviction policy is only applied if maxEntries is specified:

  // Creates a memory cache with a max capacity of 10
  final cache = newMemoryCache(maxEntries: 10);
  // In a more general sense 'newXXXCache' where xxx is the name of the storage provider, 
  // memory, file, sqlite, hive and so on

Then add an element to the cache:

  // Adds a 'value1' under 'key1' to the cache
  await cache.put('key1', 'value1');

Finally, retrieve that element:

  // Retrieves the value from the cache
  final value = await cache.get('key1');

The in-memory example is the simplest one: in this case there is no persistence, so no encoding/decoding of elements is needed. Conversely, when the storage mechanism is persistent and custom objects are stored, they need to be JSON serializable, and the appropriate configuration must be provided to allow the serialization/deserialization of those objects.

The example below uses stash_file as the storage implementation of the cache. In this case an object is stored, so in order to deserialize it the user needs to provide a way to decode it, like so: fromEncodable: (json) => Task.fromJson(json). The lambda should call a user provided function that deserializes the object. Conversely, the serialization happens by convention, i.e. by calling the toJson method on the object. Note that this example is simple enough to warrant hand-coded functions to serialize/deserialize the objects, but it could be paired with the json_serializable package or similar for the automatic generation of the JSON serialization/deserialization code.

import 'dart:io';

import 'package:stash_file/stash_file.dart';

class Task {
  final int id;
  final String title;
  final bool completed;

  Task({required this.id, required this.title, this.completed = false});

  /// Creates a [Task] from json map
  factory Task.fromJson(Map<String, dynamic> json) => Task(
      id: json['id'] as int,
      title: json['title'] as String,
      completed: json['completed'] as bool);

  /// Creates a json map from a [Task]
  Map<String, dynamic> toJson() =>
      <String, dynamic>{'id': id, 'title': title, 'completed': completed};

  @override
  String toString() {
    return 'Task $id: "$title" is ${completed ? "completed" : "not completed"}';
  }
}

void main() async {
  // Temporary path
  final path = Directory.systemTemp.path;

  // Creates a cache on the local storage with the capacity of 10 entries
  final cache = newLocalFileCache(path: path,
      maxEntries: 10, fromEncodable: (json) => Task.fromJson(json));

  // Adds a task with key 'task1' to the cache
  await cache.put(
      'task1', Task(id: 1, title: 'Run stash_file example', completed: true));
  // Retrieves and prints the value from the cache
  print(await cache.get('task1'));
}

You may want to reuse the same store for multiple caches. In order to do that you will need to create the store first, as in the example below:

  // Creates a store
  final store = newMemoryStore();
  // In a more general sense 'newXXXStore' where xxx is the name of the storage provider, 
  // memory, file, sqlite, hive and so on

Then it's just a matter of instantiating caches from the store, as presented below, where the same memory store is used to create two different caches. This is particularly relevant when using stores like sqlite, hive, file and similar, where it's normal to rely on a single store.

  // Creates a cache from the previously created store with a capacity of 10 and name 'cache1'
  final cache1 = store.cache(
      cacheName: 'cache1',
      maxEntries: 10);

  // Creates a second cache from the previously created store with a capacity of 10 and name 'cache2'
  final cache2 = store.cache(
      cacheName: 'cache2',
      maxEntries: 10);

Cache Types

To create a Cache we can use the function exported by a specific storage library, newMemoryCache in case of the stash_memory library (generically newXXXCache where xxx is the name of the storage provider).

Note that this is not the only type of cache provided; stash also provides a tiered cache, which can be created with a call to the newTieredCache function exported by the base stash library. It allows the creation of a cache that uses a primary and a secondary cache. The idea is to have a fast in-memory cache as the primary and a persistent cache as the secondary, although other combinations are definitely supported. In those cases it's normal to have a bigger capacity for the secondary cache and a lower capacity for the primary one. In the example below a new tiered cache is created using two in-memory caches, the first with a maximum capacity of 10 and the second with unlimited capacity.

  /// Creates a tiered cache with both the primary and the secondary caches using
  /// a memory based storage. The first cache with a maximum capacity of 10 and
  /// the second with unlimited capacity
  final cache = newTieredCache(
      newMemoryCache(maxEntries: 10),
      newMemoryCache());

A more common use case is to have the primary cache using memory storage and the secondary cache backed by persistent storage, like the one provided by the stash_file or stash_sqlite packages. The example below illustrates one of those use cases, with the stash_file package as the provider of the storage backend of the secondary cache.

  final cache = newTieredCache(
      newMemoryCache(maxEntries: 10),
      newFileCache(cacheName: 'diskCache', maxEntries: 1000));

Cache Operations

The Cache frontend provides a number of operations, which are presented in the table below.

| Operation | Description |
| --- | --- |
| size | Returns the number of entries in the cache |
| keys | Returns all the cache keys |
| containsKey | Checks if the cache contains an entry for the specified key |
| get | Gets the cache value for the specified key |
| put | Adds / replaces the cache value of the specified key |
| putIfAbsent | Stores the provided value under the specified key if it is not already set |
| clear | Clears the contents of the cache |
| remove | Removes the value under the specified key |
| getAndPut | Returns the cache value for the specified key and replaces it with the provided value |
| getAndRemove | Gets the cache value for the specified key and removes it |
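As an illustrative sketch of a few of these operations, reusing the in-memory provider from the earlier examples (and assuming that putIfAbsent and getAndPut take a key and a value, which matches the descriptions above but is not spelled out here):

```dart
import 'package:stash_memory/stash_memory.dart';

void main() async {
  final cache = newMemoryCache();

  // put adds or replaces a value
  await cache.put('key1', 'value1');

  // putIfAbsent only stores the value if the key is not already set,
  // so 'key1' keeps its original value here
  await cache.putIfAbsent('key1', 'value2');

  // getAndPut returns the current value while storing a new one
  final previous = await cache.getAndPut('key1', 'value3');
  print(previous);

  // remove deletes the entry; containsKey confirms it is gone
  await cache.remove('key1');
  print(await cache.containsKey('key1'));
}
```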

Expiry policies

It's possible to define how the expiration of cache entries works based on creation, access and modification operations. A number of pre-defined expiry policies are provided out-of-box that cover multiple combinations of those interactions. Note that most of the expiry policies can be configured with a specific duration, which is used to increase the expiry time when some type of operation is executed on the cache. This mechanism was heavily inspired by the JCache expiry semantics. By default the configuration does not enforce any kind of expiration, i.e. it uses the Eternal expiry policy. It is of course possible to configure an alternative expiry policy by setting the expiryPolicy parameter, e.g. newMemoryCache(expiryPolicy: const AccessedExpiryPolicy(Duration(days: 1))). Another alternative is to configure a custom expiry policy through the implementation of the ExpiryPolicy interface.

| Policy | Description |
| --- | --- |
| EternalExpiryPolicy | The cache entries never expire, regardless of the operations executed by the user |
| CreatedExpiryPolicy | Whenever an entry is created the configured duration is appended to the current time. No other operations reset the expiry time |
| AccessedExpiryPolicy | Whenever an entry is created or accessed the configured duration is appended to the current time |
| ModifiedExpiryPolicy | Whenever an entry is created or updated the configured duration is appended to the current time |
| TouchedExpiryPolicy | Whenever an entry is created, accessed or updated the configured duration is appended to the current time |
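For instance, the sketch below (reusing the in-memory provider and the AccessedExpiryPolicy configuration shown above) creates a cache whose entries expire one day after their last access:

```dart
import 'package:stash_memory/stash_memory.dart';

void main() async {
  // Entries expire one day after creation or last access
  final cache = newMemoryCache(
      expiryPolicy: const AccessedExpiryPolicy(Duration(days: 1)));

  await cache.put('key1', 'value1');

  // Each access pushes the expiry time one day into the future
  final value = await cache.get('key1');
  print(value);
}
```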

When a cache entry expires, it's possible to automate the fetching of a new value from the system of record through the cacheLoader mechanism. The user can provide a CacheLoader function that retrieves a new value for the specified key, e.g. newMemoryCache(cacheLoader: (key) => ...). Note that this function must return a Future.
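A sketch of this mechanism, where fetchFromSystemOfRecord is a hypothetical stand-in for the real system of record:

```dart
import 'package:stash_memory/stash_memory.dart';

// Hypothetical stand-in for the real system of record
Future<String> fetchFromSystemOfRecord(String key) async => 'fresh-$key';

void main() async {
  // When an expired entry is requested, the cache loader is invoked
  // to fetch a new value for the key
  final cache = newMemoryCache(
      expiryPolicy: const CreatedExpiryPolicy(Duration(minutes: 5)),
      cacheLoader: (key) => fetchFromSystemOfRecord(key));

  final value = await cache.get('key1');
  print(value);
}
```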

Eviction policies

As discussed, stash supports eviction as well, and provides a number of pre-defined eviction policies, described in the table below. Note that it's mandatory to configure the cache with a number for maxEntries, e.g. newMemoryCache(maxEntries: 10). Without this configuration the eviction algorithm is not triggered, since there is no limit defined for the number of items in the cache. The default algorithm is LRU (least-recently used), but other algorithms can be configured through the use of the evictionPolicy parameter, e.g. newMemoryCache(evictionPolicy: const LruEvictionPolicy()). Another alternative is to configure a custom eviction policy through the implementation of the EvictionPolicy interface.

| Policy | Description |
| --- | --- |
| FifoEvictionPolicy | FIFO (first-in, first-out) policy behaves in the same way as a FIFO queue, i.e. it evicts the entries in the order they were added, without any regard to how often or how many times they were accessed before |
| FiloEvictionPolicy | FILO (first-in, last-out) policy behaves in the same way as a stack and is the exact opposite of the FIFO queue. The cache evicts the entries added most recently first, without any regard to how often or how many times they were accessed before |
| LruEvictionPolicy | LRU (least-recently used) policy discards the least recently used entries first |
| MruEvictionPolicy | MRU (most-recently used) policy discards, in contrast to LRU, the most recently used entries first |
| LfuEvictionPolicy | LFU (least-frequently used) policy counts how often an entry is used. Those that are least often used are discarded first. In that sense it works very similarly to LRU, except that instead of storing how recently an entry was accessed, it stores how many times it was accessed |
| MfuEvictionPolicy | MFU (most-frequently used) policy is the exact opposite of LFU. It counts how often an entry is used, but discards those that are most used first |
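As a sketch, the cache below (reusing the in-memory provider and the LruEvictionPolicy configuration mentioned above) keeps at most 3 entries and evicts the least recently used one when a fourth is added:

```dart
import 'package:stash_memory/stash_memory.dart';

void main() async {
  // maxEntries is mandatory, otherwise eviction is never triggered
  final cache = newMemoryCache(
      maxEntries: 3, evictionPolicy: const LruEvictionPolicy());

  await cache.put('key1', 'value1');
  await cache.put('key2', 'value2');
  await cache.put('key3', 'value3');

  // Touch 'key1' so it becomes the most recently used entry
  await cache.get('key1');

  // Adding a fourth entry forces the eviction of the
  // least recently used entry
  await cache.put('key4', 'value4');
}
```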

When the maximum capacity of a cache is exceeded, the eviction of one or more entries is inevitable. At that point the eviction algorithm works with a set of entries defined by the sampling strategy used. In the default configuration the whole set of entries is used, which means that the cache statistics will be retrieved from each and every one of the entries. This works fine for modest-sized caches but can become a performance burden for bigger caches. In those cases a more efficient sampling strategy should be selected to avoid sampling the whole set of entries from storage, which is possible by configuring the sampling strategy with the sampler parameter, e.g. newMemoryCache(sampler: RandomSampler(0.5)) uses a Random sampler to select only half of the entries as candidates for eviction. The configuration of a custom sampler is also possible through the implementation of the Sampler interface.

| Sampler | Description |
| --- | --- |
| FullSampler | Returns the whole set, no sampling is performed |
| RandomSampler | Allows the sampling of a random set of entries selected from the whole set through the definition of a sampling factor |
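A sketch combining the two, based on the RandomSampler example quoted above (the maxEntries value is illustrative):

```dart
import 'package:stash_memory/stash_memory.dart';

void main() async {
  // On each eviction, only half of the entries, picked at random,
  // are considered as candidates
  final cache = newMemoryCache(
      maxEntries: 1000, sampler: RandomSampler(0.5));

  await cache.put('key1', 'value1');
}
```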


Events

The user of a Cache can subscribe to cache entry events, provided they are enabled through configuration since, by default, no events are propagated. The listening mode is set via the eventListenerMode parameter: when creating a Cache the default value is Disabled, i.e. no events are published, but it can be configured with Sync, providing synchronous events, or with Async, providing asynchronous events. A user can subscribe to all events or only to a specific set of events at subscription time. In the example below a memory cache is created with a maximum of 10 entries, subscribing to all the events synchronously.

  // Creates a memory cache with a max capacity of 10 and subscribes to all the 
  // cache events
  final cache = newMemoryCache(maxEntries: 10, eventListenerMode: EventListenerMode.synchronous)
    ..on().listen((event) => print(event));

The following events are supported:

| Event | Description |
| --- | --- |
| CreatedEntryEvent | Triggered when a cache entry is created |
| UpdatedEntryEvent | Triggered when a cache entry is updated |
| RemovedEntryEvent | Triggered when a cache entry is removed |
| ExpiredEntryEvent | Triggered when a cache entry expires |
| EvictedEntryEvent | Triggered when a cache entry is evicted |

Alternatively, it's possible to subscribe only to a specific set of events, as in the example below:

  // Creates a memory cache with a max capacity of 10 which subscribes to the Created and 
  // the Updated cache events
  final cache = newMemoryCache(maxEntries: 10, eventListenerMode: EventListenerMode.synchronous)
    ..on<CreatedEntryEvent>().listen((event) => print(event.type))
    ..on<UpdatedEntryEvent>().listen((event) => print(event.type));


Contributing

Contributions are always welcome!

If you would like to contribute to other parts of the API, feel free to open a GitHub pull request, as I'm always looking for contributions for:

  • Tests
  • Documentation
  • New APIs

See the contribution guidelines for ways to get started.

Features and Bugs

Please file feature requests and bugs at the issue tracker.


License

This project is licensed under the MIT License - see the LICENSE file for details.

