llamadart library
Typedefs
Dartggml_abort_callbackFunction
= bool Function(Pointer<Void> data)
Dartggml_backend_sched_eval_callbackFunction
= bool Function(Pointer<ggml_tensor> t, bool ask, Pointer<Void> user_data)
Dartggml_log_callbackFunction
= void Function(ggml_log_level level, Pointer<Char> text, Pointer<Void> user_data)
Dartggml_opt_epoch_callbackFunction
= void Function(bool train, ggml_opt_context_t opt_ctx, ggml_opt_dataset_t dataset, ggml_opt_result_t result, int ibatch, int ibatch_max, int t_start_us)
Dartllama_opt_param_filterFunction
= bool Function(Pointer<ggml_tensor> tensor, Pointer<Void> userdata)
Dartllama_pos
= int
Dartllama_progress_callbackFunction
= bool Function(double progress, Pointer<Void> user_data)
Dartllama_seq_id
= int
Dartllama_state_seq_flags
= int
Dartllama_token
= int
ggml_abort_callback
= Pointer<NativeFunction<ggml_abort_callbackFunction>>
Abort callback.
If not NULL, called before ggml computation.
If it returns true, the computation is aborted.
ggml_abort_callbackFunction
= Bool Function(Pointer<Void> data)
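As a sketch of how this typedef is used from Dart (the helper names `myAbort` and `_AbortNative` are illustrative, not part of the bindings), a top-level Dart function matching Dartggml_abort_callbackFunction can be turned into a native function pointer with `Pointer.fromFunction`:

```dart
import 'dart:ffi';

// Dart implementation matching Dartggml_abort_callbackFunction.
// Returning true would tell ggml to abort the computation.
bool myAbort(Pointer<Void> data) => false;

// Native signature matching ggml_abort_callbackFunction.
typedef _AbortNative = Bool Function(Pointer<Void> data);

void main() {
  // The second argument (false) is the exceptionalReturn value,
  // used if the Dart callback throws.
  final Pointer<NativeFunction<_AbortNative>> cb =
      Pointer.fromFunction<_AbortNative>(myAbort, false);
  // Pass `cb` wherever a ggml_abort_callback is expected.
  assert(cb != nullptr);
}
```

Note that `Pointer.fromFunction` only accepts top-level or static functions, so closures over Dart state must instead go through the `data` pointer.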
ggml_backend_buffer_type_t
= Pointer<ggml_backend_buffer_type>
ggml_backend_dev_t
= Pointer<ggml_backend_device>
ggml_backend_sched_eval_callback
= Pointer<NativeFunction<ggml_backend_sched_eval_callbackFunction>>
Evaluation callback for each node in the graph (set with ggml_backend_sched_set_eval_callback).
When ask == true, the scheduler wants to know if the user wants to observe this node;
this allows the scheduler to batch nodes together in order to evaluate them in a single call.
ggml_backend_sched_eval_callbackFunction
= Bool Function(Pointer<ggml_tensor> t, Bool ask, Pointer<Void> user_data)
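A minimal Dart-side sketch of this callback (the opaque `ggml_tensor` stand-in and the names `onEval` and `_EvalNative` are illustrative; the generated bindings define the real tensor struct):

```dart
import 'dart:ffi';

// Opaque stand-in for the bindings' ggml_tensor struct.
final class ggml_tensor extends Opaque {}

// Native signature matching ggml_backend_sched_eval_callbackFunction.
typedef _EvalNative = Bool Function(
    Pointer<ggml_tensor> t, Bool ask, Pointer<Void> userData);

// When ask == true, return whether this node should be observed so the
// scheduler can batch the nodes the user is not interested in.
bool onEval(Pointer<ggml_tensor> t, bool ask, Pointer<Void> userData) => true;

void main() {
  final cb = Pointer.fromFunction<_EvalNative>(onEval, false);
  // Install via ggml_backend_sched_set_eval_callback in the bindings.
  assert(cb != nullptr);
}
```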
ggml_log_callback
= Pointer<NativeFunction<ggml_log_callbackFunction>>
TODO: these functions were sandwiched in the old optimization interface; is there a better place for them?
ggml_log_callbackFunction
= Void Function(UnsignedInt level, Pointer<Char> text, Pointer<Void> user_data)
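For example, native log lines can be forwarded to Dart's `print`. This sketch assumes `package:ffi` is available for the `Utf8` conversion; `onLog` and `_LogNative` are illustrative names. Note the native side passes the level as `UnsignedInt`, which arrives in Dart as an `int` (the Dart-side typedef above exposes it as the `ggml_log_level` enum instead):

```dart
import 'dart:ffi';
import 'package:ffi/ffi.dart';

// Native signature matching ggml_log_callbackFunction.
typedef _LogNative = Void Function(
    UnsignedInt level, Pointer<Char> text, Pointer<Void> userData);

// Forward each ggml log line to Dart; `level` is a ggml_log_level value.
void onLog(int level, Pointer<Char> text, Pointer<Void> userData) {
  print('[ggml:$level] ${text.cast<Utf8>().toDartString()}');
}

void main() {
  // Void-returning callbacks need no exceptionalReturn value.
  final cb = Pointer.fromFunction<_LogNative>(onLog);
  // Register with the bindings' log-setter (e.g. llama_log_set).
  assert(cb != nullptr);
}
```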
ggml_opt_context_t
= Pointer<ggml_opt_context>
ggml_opt_dataset_t
= Pointer<ggml_opt_dataset>
ggml_opt_epoch_callback
= Pointer<NativeFunction<ggml_opt_epoch_callbackFunction>>
Signature for a callback while evaluating opt_ctx on dataset, called after an evaluation.
ggml_opt_epoch_callbackFunction
= Void Function(Bool train, ggml_opt_context_t opt_ctx, ggml_opt_dataset_t dataset, ggml_opt_result_t result, Int64 ibatch, Int64 ibatch_max, Int64 t_start_us)
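A sketch of a progress-reporting epoch callback (the opaque stand-ins and the names `onEpoch` and `_EpochNative` are illustrative; the real opaque structs come from the generated bindings):

```dart
import 'dart:ffi';

// Opaque stand-ins for the bindings' ggml_opt_* structs.
final class ggml_opt_context extends Opaque {}
final class ggml_opt_dataset extends Opaque {}
final class ggml_opt_result extends Opaque {}

// Native signature matching ggml_opt_epoch_callbackFunction.
typedef _EpochNative = Void Function(
    Bool train,
    Pointer<ggml_opt_context> optCtx,
    Pointer<ggml_opt_dataset> dataset,
    Pointer<ggml_opt_result> result,
    Int64 ibatch,
    Int64 ibatchMax,
    Int64 tStartUs);

// Called after each evaluation; here we just report batch progress.
void onEpoch(
    bool train,
    Pointer<ggml_opt_context> optCtx,
    Pointer<ggml_opt_dataset> dataset,
    Pointer<ggml_opt_result> result,
    int ibatch,
    int ibatchMax,
    int tStartUs) {
  print('${train ? "train" : "eval"}: batch $ibatch/$ibatchMax');
}

void main() {
  final cb = Pointer.fromFunction<_EpochNative>(onEpoch);
  assert(cb != nullptr);
}
```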
ggml_opt_get_optimizer_params
= Pointer<NativeFunction<ggml_opt_get_optimizer_paramsFunction>>
Callback to calculate optimizer parameters prior to a backward pass;
userdata can be used to pass arbitrary data.
ggml_opt_get_optimizer_paramsFunction
= ggml_opt_optimizer_params Function(Pointer<Void> userdata)
ggml_opt_result_t
= Pointer<ggml_opt_result>
ggml_threadpool_t
= Pointer<ggml_threadpool>
llama_memory_t
= Pointer<llama_memory_i>
llama_opt_param_filter
= Pointer<NativeFunction<llama_opt_param_filterFunction>>
Function that returns whether or not a given tensor contains trainable parameters.
llama_opt_param_filterFunction
= Bool Function(Pointer<ggml_tensor> tensor, Pointer<Void> userdata)
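A sketch of a filter that marks every tensor as trainable (the opaque `ggml_tensor` stand-in and the names `trainAll` and `_FilterNative` are illustrative):

```dart
import 'dart:ffi';

// Opaque stand-in for the bindings' ggml_tensor struct.
final class ggml_tensor extends Opaque {}

// Native signature matching llama_opt_param_filterFunction.
typedef _FilterNative = Bool Function(
    Pointer<ggml_tensor> tensor, Pointer<Void> userdata);

// Return true for tensors that contain trainable parameters;
// this trivial filter trains everything.
bool trainAll(Pointer<ggml_tensor> tensor, Pointer<Void> userdata) => true;

void main() {
  final filter = Pointer.fromFunction<_FilterNative>(trainAll, false);
  assert(filter != nullptr);
}
```

A real filter would typically inspect the tensor (e.g. its name) via the bindings before deciding.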
llama_pos
= Int32
llama_progress_callback
= Pointer<NativeFunction<llama_progress_callbackFunction>>
llama_progress_callbackFunction
= Bool Function(Float progress, Pointer<Void> user_data)
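A sketch of a model-loading progress callback (the names `onProgress` and `_ProgressNative` are illustrative). In llama.cpp, progress runs from 0 to 1 during model loading, and returning false cancels the load:

```dart
import 'dart:ffi';

// Native signature matching llama_progress_callbackFunction.
typedef _ProgressNative = Bool Function(
    Float progress, Pointer<Void> userData);

// progress runs from 0.0 to 1.0; returning false cancels loading.
bool onProgress(double progress, Pointer<Void> userData) {
  print('loading: ${(progress * 100).toStringAsFixed(1)}%');
  return true;
}

void main() {
  final cb = Pointer.fromFunction<_ProgressNative>(onProgress, false);
  // Assign `cb` to the progress_callback field of the model params.
  assert(cb != nullptr);
}
```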
llama_sampler_context_t
= Pointer<Void>
Sampling API
llama_seq_id
= Int32
llama_state_seq_flags
= Uint32
llama_token
= Int32