[WIP]backend: Integrating QNN (Qualcomm AI Engine Direct) as a dedicated backend for Qualcomm NPUs #12063
base: master
Conversation
* move qnn_instance function implementation into cpp
* wip
* wip
* move dl related function into separated file
* use cast op for gpu
* Revert "use cast op for gpu". This reverts commit 05df736.
* Reapply "use cast op for gpu". This reverts commit 2520e59.
* fix compiling error in win
* fix align_alloc in win
* fix compiling error
* add get sys free/total mem for win
* wip
* suppress warning in win
* add missing chrono header
* set the correct qnn lib name for windows
* add flag to control cpu backend
* wip
* wip
* Revert "Reapply "use cast op for gpu"". This reverts commit f56519c.
* fix compiling error for linux build
* fix cdsprpc dynamic library name
* wip
* skip rpc load fail
* fix page_align_alloc
* suppress some warning in gcc
* wip
* reuse align to function
* more log
* add log and fix warning
* wip
* fix asan errors and memory leaks
* fix the get_io_tensors_from_graph
* improve comment
* print GGML_QNN_DEFAULT_LIB_SEARCH_PATH
* revert some unused changes
* move library search path setter into qnn module
* fix android library loading
* skip qnn_device_get_platform_info for npu emulator
[Original version, 02/26/2025] I don't know this Chinese programmer and I'm not a member of his team. Thanks. [Updated version, 03/01/2025] This CN programmer, chraac, forcefully added me to this PR's loop, so I have to make the following statement:
Yeah, just to clarify, @zhouwg is not affiliated with us, but we appreciate his support! Anyone interested in discussing QNN-related topics is very welcome to join the conversation.
ggml/src/ggml-qnn/graph.cpp
}

bool qnn_graph::build_graph_from_ggml_graph(const ggml_cgraph *cgraph) {
    QNN_LOG_DEBUG("[%s][%s]build start", get_backend_name(_device), _graph_name.c_str());
Here's how we map a ggml_cgraph into a QNN graph.
ggml/src/ggml-qnn/dl_loader.hpp
    return reinterpret_cast<Fn>(dl_sym(handle, function_name));
}

} // namespace qnn
TODO: this dl_loader can be removed if upstream provides a unified dynamic-load mechanism:
llama.cpp/ggml/src/ggml-backend-reg.cpp, Line 99 in 34a846b:
static dl_handle * dl_load_library(const std::wstring & path) {
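For reference, a unified loader in the spirit of the upstream dl_load_library/dl_get_sym helpers could look roughly like this. This is a minimal sketch, not the actual upstream or PR code; the dl_load/dl_sym names here are illustrative:

#ifdef _WIN32
#    include <windows.h>
using dl_handle = HMODULE;
// Windows path: LoadLibraryW/GetProcAddress, returns NULL on failure
static dl_handle dl_load(const wchar_t * path) {
    return LoadLibraryW(path);
}
static void * dl_sym(dl_handle handle, const char * name) {
    return (void *) GetProcAddress(handle, name);
}
#else
#    include <dlfcn.h>
using dl_handle = void *;
// POSIX path: dlopen/dlsym, returns nullptr on failure
static dl_handle dl_load(const char * path) {
    return dlopen(path, RTLD_NOW | RTLD_LOCAL);
}
static void * dl_sym(dl_handle handle, const char * name) {
    return dlsym(handle, name);
}
#endif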
I'd like to rephrase my previous statement. I appreciate your earlier work, as my fork is based on your initial PR.
}

if (_rpc_buffer) {
    memcpy(_rpc_buffer->get_buffer(), _buffer->get_buffer(), _buffer->get_size());
Great effort! According to the QNN Shared Memory doc, the _rpc_buffer on HTP can be directly accessed by the CPU, so a zero-copy implementation might be possible.
Yeah, thank you for the reminder! Currently the RPC buffer is disabled:
bool should_use_mem_handle() const {
// TODO: figure out how to set rpc mem to multiple tensor
return false;
}
The idea is that we can reuse the RPC buffer for backing ggml tensors in the future, but for now it's disabled by default.
I have an item for this in my project backlog: https://github.com/users/chraac/projects/2/views/3?pane=issue&itemId=86454650
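To illustrate the zero-copy idea discussed above, here is a hypothetical sketch with simplified stand-in types (not the PR's actual buffer classes): if the ggml tensor data already lives inside the shared RPC allocation, binding only needs to register the memory handle and the staging memcpy disappears.

#include <cstring>
#include <cstddef>

// simplified stand-ins for the PR's buffer interfaces (illustrative only)
struct host_buffer { void * data; size_t size; };
struct rpc_buffer  { void * data; size_t size; bool backs_tensor; };

static void bind_tensor_sketch(const host_buffer & host, rpc_buffer & rpc) {
    if (rpc.backs_tensor) {
        // zero-copy path: the tensor was allocated inside the shared RPC buffer,
        // so the NPU can read it directly once the memory handle is registered
        return;
    }
    // staging-copy path, as in the current implementation
    std::memcpy(rpc.data, host.data, host.size);
}

In practice the zero-copy path also requires the RPC allocation to be registered with QNN through its shared-memory API, which is what should_use_mem_handle() currently gates off.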
ggml/src/ggml-qnn/op-config-impl.cpp
    return true;
}

bool ggml_qnn_matmul_op_config::create_mat_mul_nodes(QNNBackend device, Qnn_GraphHandle_t graph_handle, const int rank,
Here's how we create the corresponding mat_mul op, following ggml's guideline:
https://github.com/ggml-org/llama.cpp/blob/master/CONTRIBUTING.md
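For context, that guideline notes that matrix multiplication in ggml is unconventional: C = ggml_mul_mat(ctx, A, B) means C^T = A*B^T, i.e. C = B*A^T, which is why lowering to a conventional matmul op requires an explicit transpose of one operand.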
ggml/src/ggml-qnn/backend-ops.cpp
    output += ')';
}

void get_graph_key_from_cgraph(const ggml_cgraph *cgraph, std::string &output) {
Generates a unique key for a given ggml_cgraph. The key is constructed by concatenating the descriptions of the operations and their associated tensor dimensions within the graph.
Example key format: MUL_MATf32_256x16x10f32_256x1x10f32#LOG#ADD#ADDf32_16x1x10f32
May need some refactoring here to handle more complex graph structures and edge cases.
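As a rough illustration of that scheme, a minimal sketch (not the PR's exact implementation) could walk the graph and append op names plus source-tensor descriptions; it assumes backend code can see the ggml_cgraph layout via ggml-impl.h.

#include <string>
#include "ggml.h"       // ggml_op_name, ggml_type_name, ggml_n_dims, GGML_MAX_SRC
#include "ggml-impl.h"  // struct ggml_cgraph members (n_nodes, nodes), assumed visible to backend code

static void append_tensor_desc(const ggml_tensor * t, std::string & out) {
    out += ggml_type_name(t->type);               // e.g. "f32"
    const int n_dims = ggml_n_dims(t);
    for (int i = 0; i < n_dims; ++i) {
        out += (i == 0 ? '_' : 'x');
        out += std::to_string(t->ne[i]);          // e.g. "_256x16x10"
    }
}

static void graph_key_sketch(const ggml_cgraph * cgraph, std::string & out) {
    for (int i = 0; i < cgraph->n_nodes; ++i) {
        const ggml_tensor * node = cgraph->nodes[i];
        if (i > 0) {
            out += '#';                           // separator between ops
        }
        out += ggml_op_name(node->op);            // e.g. "MUL_MAT", "ADD", "LOG"
        for (int j = 0; j < GGML_MAX_SRC && node->src[j] != nullptr; ++j) {
            append_tensor_desc(node->src[j], out);
        }
    }
}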
* fix warning
* wip
* add todo for graph key generate
* rename some file to meet upstream guideline
* remove local .clang-format
* expend supported/unsupported counter to all ops
* append device name to log
* port to ggml logger
* fix warning after adapt to ggml logger
* append \n to all log
* use case op instead of convert
* Revert "use case op instead of convert". This reverts commit e662fc2.
* fix op that needs same shape
* opt kQnnOpsTable
* refresh params name field when getting op config
* opt npu log print
* remove unused functions

* debug
* disable reshape
* make sure single node op have same type
* fix warning at the logger
* Revert "disable reshape". This reverts commit 5aeca4b.

* print build type
* wip
* print compiling flags
* wip
* wip
Notice you've edited your original post with additional information. I'd like to clarify that my intent was to address specific technical issues that have existed throughout your PR series: without implementing correct matrix transposition, the mat_mul results would be incorrect.

And to reiterate: please focus on improving your codebase in an objective manner without making assumptions about or judging others' work. If you have any thoughts on my source code implementation, they would be very welcome! I'm open to discussion about the design, implementation details, or any other technical aspects of the code. Collaborative feedback helps us all build better software. By sharing insights about implementation approaches, performance considerations, and edge cases, we collectively create more reliable and efficient code than any individual contributor could achieve independently. (Not gonna lie - it can be tough sometimes, but I'm all about keeping an open mind and hearing different viewpoints. Just trying my best here!)
Similar comments have appeared in my first, second, and third PRs again and again. This is a typical Chinese PUA strategy: spread false information about someone else's PR, anger that PR's author, and achieve your purpose. By the way, everyone in this community can see what happened in my first, second, and third PRs and what this CN programmer did there. I personally think this behavior does serious harm to this purely technical community, even though I admit this CN programmer has good technical skills.
Similar beautiful comments, or xxx-style propaganda (very grand and beautiful words, but behavior that is exactly the opposite), can be seen from CN media in the western world, or in my first and third PRs: beautiful and grand words, but actions... I was already blocked in this community before 02/16/2025 because of my stupid mistake last year, which was partly caused by this CN programmer in my first PR, though the main reason was my own mistake. This CN programmer has already tried to use the maintainers' hands to block me again in my third PR, so that his voice and misinformation can be seen by everyone in this tech community.
        QNN_LOG_DEBUG("[%s][%s]op was unsupported, support/unsupported: %d/%d\n", qnn::get_backend_name(ctx->device),
                      ggml_op_name(op->op), ctx->supported_op_count.load(), ctx->unsupported_op_count.load());
    }
#endif
In our recent PR, we added a counter to track which operations are successfully offloaded to the QNN backend. While testing with the llama-3-8B-Instruct-Q4_K_M model, we found an interesting result:

Current Status
- Even though quantized tensor support isn't implemented yet, many operations are still being processed by the QNN backend since they operate on F32 data.
- As shown in the screenshot, we're seeing significant operation-offloading opportunities.
- However, no MUL_MAT ops are currently being offloaded to QNN, and these are critical for performance.

Next Steps
Based on this analysis, I'm shifting focus a bit to implement support for additional operation types that can be offloaded from CPU to QNN - this will provide immediate performance benefits while running models on device.
Simultaneously, I will continue investigating how to port GGML's quantization scheme to QNN - this remains a core objective for our long-term performance goals, especially for quantized models like the one used in this test.

Test Method and Resources
- Push the LLM model to the Android device folder /data/local/tmp
- Run scripts/run_device_model.sh --verbose --model-name 'meta-llama_Meta-Llama-3-8B-Instruct-Q4_K_M.gguf'; run_device_model.sh can be found here

Full running log:
run_model.8b.q4.debug.log
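For illustration, the offload counters referenced above can be as simple as relaxed atomics in the device context, bumped from the backend's supports-op check. This is a minimal sketch with illustrative names (qnn_device_context_sketch, count_op_support_sketch), not the PR's exact code:

#include <atomic>
#include <cstdint>

// illustrative device-context fragment holding the two counters
struct qnn_device_context_sketch {
    std::atomic<uint32_t> supported_op_count{0};
    std::atomic<uint32_t> unsupported_op_count{0};
};

// called once per op when the backend decides whether it can handle it
static bool count_op_support_sketch(qnn_device_context_sketch & ctx, bool op_is_supported) {
    if (op_is_supported) {
        ctx.supported_op_count.fetch_add(1, std::memory_order_relaxed);
    } else {
        ctx.unsupported_op_count.fetch_add(1, std::memory_order_relaxed);
    }
    return op_is_supported;
}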
Let's see what @slaren said in your PR:
I'm focused on improving the QNN backend support and welcome technical discussions on this topic. As the maintainer noted, provoking personal conflict isn't encouraged. Comments that stray from technical feedback will not receive a response from now on.
Warning: This is an early draft of my fork and will continue to be updated to meet the requirements in the contributing guidelines
Summary
This fork is based on zhouwg's initial PR and performs further refactoring and improvements to introduce support for the Qualcomm QNN backend to GGML.
This backend is organized into three distinct integration layers:
GGML Adaptation Layer
- Graph Caching, Mapping, and Execution: computation graphs are cached (backend-ops.cpp) to minimize redundant graph creation and boost execution performance; GGML operations are mapped to QNN op configurations (op-config-caps.cpp and op-config-impl.cpp).
- Tensor Binding and Execution Flow: tensor binding and graph execution are handled in tensor.hpp and graph.hpp, managing both host and RPC memory via buffer interfaces like qnn_buffer_interface.

QNN Object Layer
- QNN System and Instance Management: a qnn_system_interface class, originally derived from executorch, creates and frees the QNN system context; a qnn_instance class provides methods (load_backend() and load_system()) that retrieve provider lists and choose valid QNN interfaces based on API version checks.
- Dynamic Resource Handling: load_lib_with_fallback() reliably loads both the system and RPC libraries.

Utility Layer
- Dynamic Library Loading & Search Path Management: qnn-lib.cpp manages dynamic library loading with fallbacks; insert_path() and set_qnn_lib_search_path() configure environment variables (like LD_LIBRARY_PATH on Linux and ADSP_LIBRARY_PATH on Android) based on a custom library search path (see the sketch after this list).
- General Utilities:
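A minimal sketch of what such a search-path setter might do, assuming POSIX setenv; the actual insert_path()/set_qnn_lib_search_path() implementations may differ:

#include <stdlib.h>  // setenv (POSIX)
#include <cstdlib>   // std::getenv
#include <string>

static void set_qnn_lib_search_path_sketch(const std::string & custom_path) {
#if defined(_WIN32)
    (void) custom_path;  // on Windows the QNN DLL search path is handled differently
#else
#    if defined(__ANDROID__)
    const char * var = "ADSP_LIBRARY_PATH";  // consulted when loading DSP/Hexagon libraries
#    else
    const char * var = "LD_LIBRARY_PATH";    // dynamic-linker search path on Linux
#    endif
    std::string value = custom_path;
    if (const char * old = std::getenv(var); old != nullptr && old[0] != '\0') {
        value += ":";
        value += old;  // keep any pre-existing entries behind the custom path
    }
    setenv(var, value.c_str(), /*overwrite*/ 1);
#endif
}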
Key Features and Improvements
Graph Mapping Mechanism:
Backend Context and Device Management:
Build
For build instructions please refer to this page
Testing
Basic functionality of the QNN backend has been verified on Android, Linux, and Windows platforms using test-backend-ops; this is integrated into the pipeline for each commit node of the dev-refactoring branch.
Proper graph creation and execution paths are confirmed through detailed log messages.
Memory registration and cleanup within tensor binding functions have been thoroughly checked.
The table below shows GIFs of the QNN backend running on different platforms.
Current state
Future development