
stash #



Overview #

The stash caching library was designed from the ground up with extensibility in mind. It is based on a small core that can be plugged into a vast array of storage mechanisms and supports extension points across the whole code base. From a feature perspective it supports the most traditional capabilities found in well-known caching libraries, such as expiration and eviction. The API itself was heavily influenced by the JCache spec from the Java world, but draws inspiration from other libraries as well.

Support for third-party library developers is also well defined. The library provides a harness with a number of tests that allow implementers of novel storage and cache frontends to validate their implementations against the same baseline tests used by the author for every extension.

Features #

  • ⏰ Expiry policies - out-of-the-box support for the Eternal (default), Created, Accessed, Modified and Touched policies
  • 📤 Eviction policies - out-of-the-box support for FIFO (first-in, first-out), FILO (first-in, last-out), LRU (least-recently used), MRU (most-recently used), LFU (least-frequently used, the default) and MFU (most-frequently used)
  • 🎲 Cache entry sampling - Sampling of eviction candidates. Samples the whole set by default using the Full sampler but also supports a Random sampling strategy
  • 📇 Partial cache updates - If permitted by the storage implementation, supports updating only the cache entry header when recording the statistic fields (hit count, access time, expiry time and so on) without replacing the whole value
  • 🚀 Built-in binary serialization - Provides an out-of-the-box, high-performance binary serialization using msgpack, inspired by the msgpack_dart package and adapted to the specific needs of this library
  • ➿ Extensible - Pluggable implementations of custom encoding/decoding, storage, expiry, eviction and sampling strategies
  • 🔧 Testable - Storage and cache harnesses for third-party support of novel storage and cache frontend strategies
  • 🔍 Tiered cache - Allows the configuration of a highly performing primary cache (in-memory, for example) backed by a secondary, second-level cache

Storage Implementations #

There's a vast array of storage implementations available which you can use.

Package | Description
------- | -----------
stash_memory | A memory storage implementation
stash_file | A file storage implementation
stash_sqlite | A sqlite storage implementation using the moor package
stash_hive | A hive storage implementation using the hive package
stash_sembast | A sembast storage implementation using the sembast package
stash_objectbox | An objectbox storage implementation using the objectbox package

Library Integrations #

There are also integrations with well-known Dart libraries.

Package | Description
------- | -----------
stash_dio | Integrates with the Dio HTTP client

Getting Started #

Select one of the storage implementation libraries and add the package to your pubspec.yaml, replacing x.x.x with the latest version of the storage implementation. The example below uses the stash_memory package, which provides an in-memory implementation:

dependencies:
    stash_memory: ^x.x.x

Run the following command to install dependencies:

dart pub get

Finally, to start developing, import the corresponding implementation. The example below imports the in-memory storage provider, which you can start using by importing the stash_memory library:

import 'package:stash/stash_memory.dart';
// In a more general sense 'package:stash/stash_xxx.dart' where xxx is the name of the
// storage provider, memory, hive and so on

Usage #

Simple usage #

Create a Cache using the appropriate storage mechanism. For example, with the in-memory implementation you use the newMemoryCache function, specifying the cache name. Note that if a name is not provided a uuid is automatically assigned as the name:

  // Creates a memory cache with unlimited capacity
  final cache = newMemoryCache();
  // In a more general sense 'newXXXCache' where xxx is the name of the storage provider, 
  // memory, file, sqlite, hive and so on

or alternatively specify a maximum capacity, 10 for example. Note that the eviction policy is only applied if maxEntries is specified:

  // Creates a memory cache with a max capacity of 10
  final cache = newMemoryCache(maxEntries: 10);
  // In a more general sense 'newXXXCache' where xxx is the name of the storage provider, 
  // memory, file, sqlite, hive and so on

Then add an element to the cache:

  // Adds a 'value1' under 'key1' to the cache
  await cache.put('key1', 'value1');

Finally, retrieve that element:

  // Retrieves the value from the cache
  final value = await cache.get('key1');

The in-memory example is the simplest one. Note that in that case there is no persistence, so encoding/decoding of elements is not needed. Conversely, when the storage mechanism uses persistence and custom objects need to be stored, those objects must be json serializable and the appropriate configuration must be provided to allow their serialization/deserialization. This means that all the other storage implementations that stash currently supports need that additional configuration.

Find below an example that uses stash_file as the storage implementation of the cache. In this case an object is stored, so in order to deserialize it the user needs to provide a way to decode it, like so: fromEncodable: (json) => Task.fromJson(json). The lambda should call a user-provided function that deserializes the object. Conversely, serialization happens by convention, i.e. by calling the toJson method on the object. Note that this example is simple enough to warrant hand-written serialization/deserialization functions, but it could be paired with the json_serializable package or similar to automatically generate the Json serialization/deserialization code.

  import 'dart:io';
  import 'package:stash_file/stash_file.dart';

  class Task {
    final int id;
    final String title;
    final bool completed;

    Task({required this.id, required this.title, this.completed = false});

    /// Creates a [Task] from json map
    factory Task.fromJson(Map<String, dynamic> json) => Task(
        id: json['id'] as int,
        title: json['title'] as String,
        completed: json['completed'] as bool);

    /// Creates a json map from a [Task]
    Map<String, dynamic> toJson() =>
        <String, dynamic>{'id': id, 'title': title, 'completed': completed};

    @override
    String toString() {
      return 'Task ${id}: "${title}" is ${completed ? "completed" : "not completed"}';
    }
  }

  void main() async {
    // Temporary path
    final path = Directory.systemTemp.path;

    // Creates a file based cache with a capacity of 10
    // Since the name was not provided a uuid based name is assigned to the cache
    final cache = newDiskCache(path,
        maxEntries: 10, fromEncodable: (json) => Task.fromJson(json));

    // Adds a task with key 'task1' to the cache
    await cache.put(
        'task1', Task(id: 1, title: 'Run stash example', completed: true));
    // Retrieves the task from the cache
    final value = await cache.get('task1');

    print(value);
  }

Cache Types #

To create a Cache we use the function exported by the storage library: newMemoryCache in the case of the base stash library (generically newXXXCache, where XXX is the name of the storage provider).

Note that this is not the only type of cache provided. There is another, the tiered cache, which can be created with a call to newTieredCache. It allows the creation of a cache that uses primary and secondary cache surrogates. The idea is to have a fast in-memory cache as the primary and a persistent cache as the secondary. In these cases it's normal to have a bigger capacity for the secondary and a lower capacity for the primary. In the example below a new tiered cache is created using two in-memory caches, the first with a maximum capacity of 10 and the second with unlimited capacity.

  /// Creates a tiered cache with both the primary and the secondary caches using 
  /// a memory based storage. The first cache with a maximum capacity of 10 and 
  /// the second with unlimited capacity
  final cache = newTieredCache(
      newMemoryCache(maxEntries: 10),
      newMemoryCache());

A more common use case is to have the primary cache backed by memory storage and the secondary backed by a persistent storage like the one provided by stash_file or stash_sqlite. The example below illustrates one of those use cases with the stash_file package as the provider of the storage backend of the secondary cache.

  final cache = newTieredCache(
      newMemoryCache(maxEntries: 10),
      newFileCache(cacheName: 'diskCache', maxEntries: 1000));

Cache Operations #

The Cache frontend provides a number of other operations besides the ones mentioned in the previous sections. The table below gives a general overview of those operations.

Operation | Description
--------- | -----------
size | Returns the number of entries in the cache
keys | Returns all the cache keys
containsKey | Checks if the cache contains an entry for the specified key
get | Gets the cache value for the specified key
put | Adds / replaces the cache value of the specified key
putIfAbsent | Sets the cache value of the specified key if it is not already set
clear | Clears the contents of the cache
remove | Removes the value of the specified key
getAndPut | Returns the cache value of the specified key and replaces it with the provided value
getAndRemove | Gets the cache value of the specified key and removes it
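
To make the table concrete, the sketch below reuses the in-memory cache from the earlier sections and chains several of these operations; the keys, values and commented results are purely illustrative:

  // A minimal sketch of the operations above on an in-memory cache
  final cache = newMemoryCache();

  await cache.put('key1', 'value1');                        // stores 'value1' under 'key1'
  await cache.putIfAbsent('key1', 'other');                 // no effect, 'key1' is already set
  final present = await cache.containsKey('key1');          // true
  final value = await cache.get('key1');                    // 'value1'
  final previous = await cache.getAndPut('key1', 'value2'); // returns 'value1', stores 'value2'
  final removed = await cache.getAndRemove('key1');         // returns 'value2' and removes the entry
  await cache.clear();                                      // empties the cache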

Expiry policies #

It's possible to define how the expiration of cache entries works based on creation, access and modification operations. A number of pre-defined expiry policies are provided out of the box, covering multiple combinations of those interactions. Note that most of the expiry policies can be configured with a specific duration which is used to extend the expiry time when a certain type of operation is executed on the cache. This mechanism was heavily inspired by the JCache expiry semantics. By default the configuration does not enforce any kind of expiration, i.e. it uses the Eternal expiry policy. It is of course possible to configure an alternative expiry policy by setting the expiryPolicy parameter, e.g. newMemoryCache(expiryPolicy: const AccessedExpiryPolicy(Duration(days: 1))). Another alternative is to configure a custom expiry policy through the implementation of the ExpiryPolicy interface.

Policy | Description
------ | -----------
EternalExpiryPolicy | The cache entries never expire, regardless of the operations executed by the user
CreatedExpiryPolicy | Whenever an entry is created the configured duration is appended to the current time. No other operations reset the expiry time
AccessedExpiryPolicy | Whenever an entry is created or accessed the configured duration is appended to the current time
ModifiedExpiryPolicy | Whenever an entry is created or updated the configured duration is appended to the current time
TouchedExpiryPolicy | Whenever an entry is created, accessed or updated the configured duration is appended to the current time

When a cache entry expires, it's possible to automate the fetching of a new value from the system of record through the cacheLoader parameter. The user can provide a CacheLoader function that retrieves a new value for the specified key, e.g. newMemoryCache(cacheLoader: (key) => ...). Note that this function must return a Future.
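
For instance, combining an expiry policy with a cache loader might look like the sketch below; the CreatedExpiryPolicy constructor is assumed to take a Duration, mirroring the AccessedExpiryPolicy example above, and the loader simply returns a placeholder Future:

  // Entries expire 10 minutes after creation; after expiry, a get triggers the
  // cacheLoader to fetch a fresh value for the requested key
  final cache = newMemoryCache(
      expiryPolicy: const CreatedExpiryPolicy(Duration(minutes: 10)),
      cacheLoader: (key) => Future.value('fresh value for $key'));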

Eviction policies #

As already discussed, stash supports eviction as well and provides a number of pre-defined eviction policies, described in the table below. Note that it's mandatory to configure the cache with a number for maxEntries, e.g. newMemoryCache(maxEntries: 10). Without this configuration the eviction algorithm is not triggered, since there is no limit defined for the number of items in the cache. The default algorithm is LRU (least-recently used), but other algorithms can be configured through the evictionPolicy parameter, e.g. newMemoryCache(evictionPolicy: const LruEvictionPolicy()). Another alternative is to configure a custom eviction policy through the implementation of the EvictionPolicy interface.

Policy | Description
------ | -----------
FifoEvictionPolicy | FIFO (first-in, first-out) policy behaves in the same way as a FIFO queue, i.e. it evicts entries in the order they were added, without any regard to how often or how many times they were accessed before
FiloEvictionPolicy | FILO (first-in, last-out) policy behaves in the same way as a stack, the exact opposite of a FIFO queue. The cache evicts the entry added most recently, without any regard to how often or how many times it was accessed before
LruEvictionPolicy | LRU (least-recently used) policy discards the least recently used entries first
MruEvictionPolicy | MRU (most-recently used) policy discards, in contrast to LRU, the most recently used entries first
LfuEvictionPolicy | LFU (least-frequently used) policy counts how often an entry is used and discards the least often used entries first. It works very similarly to LRU, except that instead of storing how recently an entry was accessed, it stores how many times it was accessed
MfuEvictionPolicy | MFU (most-frequently used) policy is the exact opposite of LFU. It also counts how often an entry is used, but discards the most used entries first

When the maximum capacity of a cache is exceeded, eviction of one or more entries is inevitable. At that point the eviction algorithm works with the set of entries defined by the sampling strategy in use. In the default configuration the whole set of entries is used, which means that the cache statistics are retrieved from each and every entry. This works fine for modestly sized caches but can become a performance burden for bigger ones. In those cases a more efficient sampling strategy should be selected to avoid sampling the whole set of entries from storage. The sampling strategy can be configured with the sampler parameter, e.g. newMemoryCache(sampler: RandomSampler(0.5)) uses a Random sampler to select only half of the entries as candidates for eviction. The configuration of a custom sampler is also possible through the implementation of the Sampler interface.

Sampler | Description
------- | -----------
FullSampler | Returns the whole set, no sampling is performed
RandomSampler | Allows the sampling of a random set of entries selected from the whole set through the definition of a sampling factor
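
Putting the eviction parameters together, the sketch below bounds the cache and tunes both the eviction policy and the sampler; the no-argument const constructor of FifoEvictionPolicy is assumed, mirroring the LruEvictionPolicy example above:

  // A cache capped at 100 entries; when the cap is exceeded, eviction uses a
  // FIFO policy over a random sample of 50% of the stored entries
  final cache = newMemoryCache(
      maxEntries: 100,
      evictionPolicy: const FifoEvictionPolicy(),
      sampler: RandomSampler(0.5));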


Using the Storage and Cache Harnesses #

The stash library provides a way to easily import the set of standard tests used for the reference implementations of CacheStore and Cache, allowing them to be reused to test custom implementations provided by external parties. It also provides a number of classes for generating values, which can be used in each one of the tests:

  • BoolGenerator
  • IntGenerator
  • DoubleGenerator
  • StringGenerator
  • IteratorGenerator
  • MapGenerator
  • SampleClassGenerator

Find below an example implementation to test a CustomStore:

// Import the stash harness
import 'package:stash/stash_harness.dart';
// Import your custom extension here
import 'package:stash_custom/stash_custom.dart';
// Import the test package
import 'package:test/test.dart';

// Primitive test context, to be used for the primitive tests
class DefaultContext extends TestContext<CustomStore> {
  DefaultContext(ValueGenerator generator,
      {dynamic Function(Map<String, dynamic>) fromEncodable})
      : super(generator, fromEncodable: fromEncodable);


  // Provide an implementation of the function that creates a store
  @override
  Future<CustomStore> newStore() {
    ...
  }

  // Optionally provide an implementation of the function that creates a custom cache
  DefaultCache newCache(CustomStore store,
      {String name,
      ExpiryPolicy expiryPolicy,
      KeySampler sampler,
      EvictionPolicy evictionPolicy,
      int maxEntries,
      CacheLoader cacheLoader,
      Clock clock}) {
      ...
  }

  // Plug the test package's `expect` into the `check` method used in the tests.
  // This is needed to avoid having a `test` package dependency
  // on the base `stash` library
  @override
  void check(actual, matcher, {String reason, skip}) {
    expect(actual, matcher, reason: reason, skip: skip);
  }

  // Optionally provide an implementation of the function that deletes a store.
  Future<void> deleteStore(CustomStore store) {
    ...
  }
}

// Object test context, to be used for the class tests
// This example uses the provided SampleClass
class ObjectContext extends DefaultContext {
  ObjectContext(ValueGenerator generator)
      : super(generator,
            fromEncodable: (Map<String, dynamic> json) =>
                SampleClass.fromJson(json));
}

void main() async {
  ...
  // Test the `int` primitive with the provided `DefaultContext`
  test('Int', () async {
    // Run all the tests for a store
    await testStoreWith<CustomStore>(DefaultContext(IntGenerator()));
    // Run all the tests for a cache
    await testCacheWith<CustomStore>(DefaultContext(IntGenerator()));
  });
  ...
  // Test the `SampleClass` with an `int` field
  test('Class<int>', () async {
    // Run all the tests for a store
    await testStoreWith<CustomStore>(
        ObjectContext(SampleClassGenerator(IntGenerator())));
    // Run all the tests for a cache
    await testCacheWith<CustomStore>(
        ObjectContext(SampleClassGenerator(IntGenerator())));
  });
  ...
}

Please take a look at the examples provided in one of the storage implementations, for example stash_file or stash_sqlite.

Contributing #

The stash library is developed on a best-effort basis, in the spirit of "Scratch your own itch!", meaning the APIs reflect the author's own use cases. Even so, contributions are always welcome!

If you would like to contribute other parts of the API, feel free to open a GitHub pull request, as I'm always looking for contributions of:

  • Tests
  • Documentation
  • New APIs

See CONTRIBUTING.md for ways to get started.

Features and Bugs #

Please file feature requests and bugs at the issue tracker.

License #

This project is licensed under the MIT License - see the LICENSE file for details.
