getBatched<_Model extends _RepositoryModel> method

Get all results in a series of batchSizes (defaults to 50). Useful for large queries or remote results.

batchSize maps to the query's limit, and the query's pagination offset is incremented in query.providerArgs['offset']. The endpoint for _Model should expect these arguments. The request will recurse until the number of returned results does not equal batchSize.
requireRemote ensures the data is fresh at the expense of increased execution time. Defaults to false.

seedOnly does not load data from SQLite after inserting records. Association queries can be expensive for large datasets, making deserialization a significant hit when the result is ignorable (e.g. eager loading). Defaults to false.
Implementation
Future<List<_Model>> getBatched<_Model extends _RepositoryModel>({
  Query? query,
  int batchSize = 50,
  bool requireRemote = false,
  bool seedOnly = false,
}) async {
  query = query ?? Query();
  final queryWithLimit = query.copyWith(
    providerArgs: {...query.providerArgs, 'limit': batchSize},
  );
  final total = <_Model>[];

  /// Retrieve up to [batchSize] starting at [offset]. Recursively retrieves the next
  /// [batchSize] until no more results are retrieved.
  Future<List<_Model>> getFrom(int offset) async {
    // add offset to the existing query
    final recursiveQuery = queryWithLimit.copyWith(
      providerArgs: {...queryWithLimit.providerArgs, 'offset': offset},
    );

    final results = await get<_Model>(
      query: recursiveQuery,
      requireRemote: requireRemote,
      seedOnly: seedOnly,
    );
    total.addAll(results);

    // if results match the batchSize, increase offset and get again
    if (results.length == batchSize) {
      return await getFrom(offset + batchSize);
    }

    return total;
  }

  return await getFrom(0);
}
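As a sketch of what the endpoint might observe, assuming the provider forwards providerArgs as query string parameters (an assumption about the provider, not confirmed by the source) and a hypothetical /users endpoint:

```
GET /users?limit=50&offset=0     first batch: 50 records returned
GET /users?limit=50&offset=50    second batch: 50 records returned
GET /users?limit=50&offset=100   fewer than 50 records returned; recursion stops
```

Because recursion stops only when a batch comes back short, an endpoint whose total count is an exact multiple of batchSize will receive one final request that returns zero records.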