llama : extend API to get max devices at runtime (#2253)
AsakusaRinne authored Jul 19, 2023
1 parent 45a1b07 commit 294f424
Showing 2 changed files with 6 additions and 0 deletions.
4 changes: 4 additions & 0 deletions llama.cpp
@@ -875,6 +875,10 @@ struct llama_model_quantize_params llama_model_quantize_default_params() {
     return result;
 }
 
+int llama_max_devices() {
+    return LLAMA_MAX_DEVICES;
+}
+
 bool llama_mmap_supported() {
     return llama_mmap::SUPPORTED;
 }
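For context, LLAMA_MAX_DEVICES is a preprocessor constant declared in llama.h whose value depends on how the library was built (a single device for CPU-only builds, several for multi-GPU builds). The sketch below is illustrative of that kind of definition, not a verbatim copy of the header:

// Illustrative sketch of a build-dependent device limit (not verbatim from llama.h):
#ifdef GGML_USE_CUBLAS
#define LLAMA_MAX_DEVICES GGML_CUDA_MAX_DEVICES   // CUDA builds can split work across several GPUs
#else
#define LLAMA_MAX_DEVICES 1                       // CPU-only builds expose a single "device"
#endif

Because it is a macro, its value is baked in at compile time and is invisible to callers that only link against the compiled library; the new llama_max_devices() function exposes the same value through the C ABI.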
2 changes: 2 additions & 0 deletions llama.h
@@ -153,6 +153,8 @@ extern "C" {
         int32_t n_eval;
     };
 
+    LLAMA_API int llama_max_devices();
+
     LLAMA_API struct llama_context_params llama_context_default_params();
     LLAMA_API struct llama_model_quantize_params llama_model_quantize_default_params();
 
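A minimal caller-side sketch (not part of the commit) of how a program or binding might query the device limit at runtime instead of relying on the LLAMA_MAX_DEVICES macro; the split array below is purely illustrative:

#include <stdio.h>
#include <stdlib.h>
#include "llama.h"

int main(void) {
    // Ask the loaded library how many devices this build supports.
    int n_devices = llama_max_devices();
    printf("this build of llama.cpp supports up to %d device(s)\n", n_devices);

    // A binding could, for example, size a per-device split array from the
    // runtime value rather than hard-coding the compile-time macro.
    float * split = calloc(n_devices, sizeof(float));
    if (split) {
        split[0] = 1.0f;   // put all work on the first device (illustrative)
        free(split);
    }
    return 0;
}

This is the usual motivation for such an accessor: FFI consumers cannot expand C preprocessor macros, so a function call is the only way for them to read the build-time limit at runtime.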
