Name and Version

./llama-cli --version
load_backend: failed to find ggml_backend_init in ./libggml-cuda.so
load_backend: failed to find ggml_backend_init in ./libggml-cpu.so
version: 4603 (4a2b196)
built with gcc (GCC) 8.5.0 20210514 (TencentOS 8.5.0-18) for x86_64-redhat-linux

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

llama-cli

Command line

./llama-cli --n_gpu_layers 29 -m /path/to/qwen2.5-7b-it-Q4_K_M-LOT.gguf

Problem description & steps to reproduce

Run the following command:

./llama-cli --n_gpu_layers 29 -m /path/to/qwen2.5-7b-it-Q4_K_M-LOT.gguf

Then check the log. Everything looks fine except for these two errors:

load_backend: failed to find ggml_backend_init in ./libggml-cuda.so
load_backend: failed to find ggml_backend_init in ./libggml-cpu.so
These errors come from load_backend in ggml-backend-reg.cpp, which is implemented as follows:
ggml_backend_reg_t load_backend(const std::wstring & path, bool silent) {
    dl_handle_ptr handle { dl_load_library(path) };
    if (!handle) {
        if (!silent) {
            GGML_LOG_ERROR("%s: failed to load %s\n", __func__, utf16_to_utf8(path).c_str());
        }
        return nullptr;
    }

    auto score_fn = (ggml_backend_score_t) dl_get_sym(handle.get(), "ggml_backend_score");
    if (score_fn && score_fn() == 0) {
        if (!silent) {
            GGML_LOG_INFO("%s: backend %s is not supported on this system\n", __func__, utf16_to_utf8(path).c_str());
        }
        return nullptr;
    }

    auto backend_init_fn = (ggml_backend_init_t) dl_get_sym(handle.get(), "ggml_backend_init");
    if (!backend_init_fn) {
        if (!silent) {
            GGML_LOG_ERROR("%s: failed to find ggml_backend_init in %s\n", __func__, utf16_to_utf8(path).c_str());
        }
        return nullptr;
    }

    ggml_backend_reg_t reg = backend_init_fn();
    if (!reg || reg->api_version != GGML_BACKEND_API_VERSION) {
        if (!silent) {
            if (!reg) {
                GGML_LOG_ERROR("%s: failed to initialize backend from %s: ggml_backend_init returned NULL\n", __func__, utf16_to_utf8(path).c_str());
            } else {
                GGML_LOG_ERROR("%s: failed to initialize backend from %s: incompatible API version (backend: %d, current: %d)\n",
                    __func__, utf16_to_utf8(path).c_str(), reg->api_version, GGML_BACKEND_API_VERSION);
            }
        }
        return nullptr;
    }

    GGML_LOG_INFO("%s: loaded %s backend from %s\n", __func__, ggml_backend_reg_name(reg), utf16_to_utf8(path).c_str());
    register_backend(reg, std::move(handle));
    return reg;
}
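
To confirm what the loader sees, the two shared objects can be probed directly for the same symbols that load_backend looks up. Here is a minimal standalone check using POSIX dlopen/dlsym (the file name and build line are only for illustration):

// check_backend_syms.cpp — build with: g++ check_backend_syms.cpp -ldl
#include <cstdio>
#include <dlfcn.h>

int main(int argc, char ** argv) {
    const char * path = argc > 1 ? argv[1] : "./libggml-cuda.so";
    void * handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen(%s) failed: %s\n", path, dlerror());
        return 1;
    }
    // the same two lookups that load_backend performs via dl_get_sym
    void * score = dlsym(handle, "ggml_backend_score");
    void * init  = dlsym(handle, "ggml_backend_init");
    printf("%s: ggml_backend_score=%p ggml_backend_init=%p\n", path, score, init);
    dlclose(handle);
    return 0;
}

A null pointer for ggml_backend_init is what triggers the error branch above.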
I believe that when score_fn is nullptr, backend_init_fn will also be nullptr: a library that does not export ggml_backend_score does not export ggml_backend_init either, so it is not a loadable dynamic backend at all. That is exactly the situation that produces the errors above, where dl_get_sym fails for both symbols. To address the issue, the score check should be changed from score_fn && score_fn() == 0 to !score_fn || score_fn() == 0, so that such libraries are rejected as unsupported before the ggml_backend_init lookup is reached.
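
Concretely, the suggested change applied to load_backend would look like this (an untested sketch; only the condition differs from the current code):

    auto score_fn = (ggml_backend_score_t) dl_get_sym(handle.get(), "ggml_backend_score");
    // proposed: a library that exports neither ggml_backend_score nor
    // ggml_backend_init is not a dynamic backend, so reject it here as
    // unsupported instead of falling through to the ggml_backend_init error
    if (!score_fn || score_fn() == 0) {
        if (!silent) {
            GGML_LOG_INFO("%s: backend %s is not supported on this system\n", __func__, utf16_to_utf8(path).c_str());
        }
        return nullptr;
    }

With this change, a library that is missing both symbols would be reported as not supported rather than with the misleading "failed to find ggml_backend_init" error.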