search method

Future<RagSearchResult> search(
  String query, {
  int topK = 10,
  int tokenBudget = 2000,
  ContextStrategy strategy = ContextStrategy.relevanceFirst,
  int adjacentChunks = 0,
  bool singleSourceMode = false,
  List<int>? sourceIds,
  String? collectionId,
})

Searches for relevant chunks and assembles context for the LLM.

query - The search query text.
topK - Number of top results to return (default: 10).
tokenBudget - Maximum number of tokens for the assembled context (default: 2000).
strategy - Context assembly strategy (default: relevanceFirst).
adjacentChunks - Include N chunks before/after each match (default: 0).
singleSourceMode - Only include chunks from the most relevant source (default: false).
sourceIds - Optional list of source IDs to restrict the search to.
collectionId - Optional ID of the collection to search.
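A minimal usage sketch, assuming a `rag` object that exposes this method (its concrete class is not named in this section); the query text and parameter values are illustrative.

Future<void> example(dynamic rag) async {
  // Ask for fewer results and a tighter context than the defaults, with one
  // neighbouring chunk on each side of every match.
  final RagSearchResult result = await rag.search(
    'How is the token budget enforced?',
    topK: 5,
    tokenBudget: 1000,
    adjacentChunks: 1,
  );
  // Pass `result` on to prompt assembly; see the RagSearchResult type for its shape.
}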

Implementation

Future<RagSearchResult> search(
  String query, {
  int topK = 10,
  int tokenBudget = 2000,
  ContextStrategy strategy = ContextStrategy.relevanceFirst,
  int adjacentChunks = 0,
  bool singleSourceMode = false,
  List<int>? sourceIds,
  String? collectionId,
}) async {
  final service = await _serviceForCollection(collectionId);
  await _flushIndex(
    collectionId: collectionId,
  ); // Ensure index is up-to-date before searching
  return service.search(
    query,
    topK: topK,
    tokenBudget: tokenBudget,
    strategy: strategy,
    adjacentChunks: adjacentChunks,
    singleSourceMode: singleSourceMode,
    sourceIds: sourceIds,
  );
}
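For scoped searches, a hedged sketch assuming the same kind of caller; the source IDs and collection ID below are placeholders, not values defined by this API.

Future<void> scopedExample(dynamic rag) async {
  // Restrict matching to two specific sources within one collection, and keep
  // the assembled context to chunks from the single most relevant source.
  final result = await rag.search(
    'release notes for version 2.0',
    sourceIds: [3, 7],
    collectionId: 'docs',
    singleSourceMode: true,
  );
  // The index for the chosen collection is flushed before the search runs
  // (see the implementation above), so newly added chunks are searchable.
}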