llama_backend_init method

void llama_backend_init(
  bool numa
)

Initialize the llama + ggml backend. If numa is true, use NUMA optimizations. Call once at the start of the program.
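
A minimal usage sketch, assuming an ffigen-style bindings class named NativeLibrary, a generated bindings file llama_bindings.dart, and a shared library file libllama.so; these names are assumptions and will differ per project:

import 'dart:ffi';

// Hypothetical import of the generated bindings file that defines
// NativeLibrary and llama_backend_init.
import 'llama_bindings.dart';

void main() {
  // Load the compiled llama.cpp shared library (name/path is an assumption).
  final bindings = NativeLibrary(DynamicLibrary.open('libllama.so'));

  // Call once at the start of the program.
  // Pass true to enable NUMA optimizations.
  bindings.llama_backend_init(false);

  // ... load a model and run inference here ...
}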

Implementation

void llama_backend_init(
  bool numa,
) {
  // Delegate to the private native binding _llama_backend_init.
  return _llama_backend_init(
    numa,
  );
}