llama_sample_token_mirostat_v2 method

int llama_sample_token_mirostat_v2(
  Pointer<llama_context> ctx,
  Pointer<llama_token_data_array> candidates,
  double tau,
  double eta,
  Pointer<Float> mu,
)

@details Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.

@param candidates A vector of llama_token_data containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.

@param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.

@param eta The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.

@param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.

Implementation

int llama_sample_token_mirostat_v2(
  ffi.Pointer<llama_context> ctx,
  ffi.Pointer<llama_token_data_array> candidates,
  double tau,
  double eta,
  ffi.Pointer<ffi.Float> mu,
) {
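  // Forwards the call to the native llama_sample_token_mirostat_v2 symbol
  // resolved by the generated bindings.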
  return _llama_sample_token_mirostat_v2(
    ctx,
    candidates,
    tau,
    eta,
    mu,
  );
}
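
A minimal usage sketch follows. It is not part of the generated bindings: the helper name, the nextCandidates callback, and the tau/eta defaults are illustrative assumptions, and the call is written unqualified as it appears on this page (in practice it is invoked on the generated bindings instance). The important detail it shows is that mu is caller-owned state, initialized to 2 * tau and reused across sampling calls so the algorithm can keep updating it.

import 'dart:ffi' as ffi;
import 'package:ffi/ffi.dart' show calloc;

// Illustrative sampling loop. `ctx` and the candidates produced by
// `nextCandidates` are assumed to have been prepared with the other
// llama_* calls from these bindings.
List<int> sampleWithMirostatV2(
  ffi.Pointer<llama_context> ctx,
  ffi.Pointer<llama_token_data_array> Function() nextCandidates,
  int tokensToGenerate, {
  double tau = 5.0, // target surprise; higher gives less predictable text
  double eta = 0.1, // learning rate for the mu update
}) {
  final tokens = <int>[];
  // mu is caller-owned state: allocate once, initialize to 2 * tau,
  // and reuse it for every step so Mirostat can keep updating it.
  final mu = calloc<ffi.Float>()..value = 2 * tau;
  try {
    for (var i = 0; i < tokensToGenerate; i++) {
      final candidates = nextCandidates();
      tokens.add(
        llama_sample_token_mirostat_v2(ctx, candidates, tau, eta, mu),
      );
    }
  } finally {
    calloc.free(mu);
  }
  return tokens;
}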