searchHybridWithContext method

Future<RagSearchResult> searchHybridWithContext(
  String query, {
  int topK = 10,
  int tokenBudget = 2000,
  ContextStrategy strategy = ContextStrategy.relevanceFirst,
  double vectorWeight = 0.5,
  double bm25Weight = 0.5,
})

Hybrid search with context assembly for an LLM.

Similar to search, but uses hybrid (vector + BM25) retrieval.
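A minimal usage sketch. The store variable, its setup, and the query string are placeholders rather than part of this API; the named parameters and the result fields follow the signature above and the RagSearchResult constructor call in the implementation below.

final result = await store.searchHybridWithContext(
  'How does reciprocal rank fusion combine rankings?',
  topK: 8,
  tokenBudget: 1500,
  vectorWeight: 0.7, // lean toward semantic similarity
  bm25Weight: 0.3,   // keep some keyword matching
);

// result.context holds the assembled context; result.chunks the retrieved hits.
print(result.context);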

Implementation

Future<RagSearchResult> searchHybridWithContext(
  String query, {
  int topK = 10,
  int tokenBudget = 2000,
  ContextStrategy strategy = ContextStrategy.relevanceFirst,
  double vectorWeight = 0.5,
  double bm25Weight = 0.5,
}) async {
  // 1. Get hybrid search results
  final hybridResults = await searchHybrid(
    query,
    topK: topK,
    vectorWeight: vectorWeight,
    bm25Weight: bm25Weight,
  );

  // 2. Convert to ChunkSearchResult format for context building
  // Note: Hybrid search returns content directly, so we create minimal chunks
  final chunks = hybridResults
      .map(
        (r) => ChunkSearchResult(
          chunkId: r.docId,
          sourceId: r.docId, // Same as chunk ID for simple docs
          content: r.content,
          chunkIndex: 0,
          chunkType: 'general', // Hybrid search doesn't return chunk type
          similarity: r.score, // RRF score as similarity
        ),
      )
      .toList();

  // 3. Assemble context
  final context = ContextBuilder.build(
    searchResults: chunks,
    tokenBudget: tokenBudget,
    strategy: strategy,
  );

  return RagSearchResult(chunks: chunks, context: context);
}
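Because hybrid search returns content directly, the converted chunks use placeholder values for chunkIndex and chunkType, and the fused RRF score is carried over as similarity. A short, hedged sketch of inspecting those chunks, reusing the placeholder store from the example above and assuming similarity is numeric:

final result = await store.searchHybridWithContext('vector search vs BM25');

// Each converted chunk exposes the fused RRF score via `similarity`.
for (final chunk in result.chunks) {
  print('${chunk.similarity.toStringAsFixed(4)}  ${chunk.chunkId}');
}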