ncnn is a high-performance neural network inference computing framework optimized for mobile platforms. ncnn has considered deployment and use on mobile phones deeply from the very beginning of its design. ncnn has no third-party dependencies. It is cross-platform and runs faster than all known open-source frameworks on mobile phone CPUs. With ncnn, developers can easily deploy deep learning algorithm models to mobile platforms, create intelligent apps, and bring artificial intelligence to your fingertips. ncnn is currently used in many Tencent applications, such as QQ, Qzone, WeChat, and Pitu.
Telegram Group https://t.me/ncnnyes
Discord Channel https://discord.gg/YRsxgmF
- Classical CNN: VGG AlexNet GoogleNet Inception ...
- Practical CNN: ResNet DenseNet SENet FPN ...
- Light-weight CNN: SqueezeNet MobileNetV1 MobileNetV2/V3 ShuffleNetV1 ShuffleNetV2 MNasNet ...
- Face Detection: MTCNN RetinaFace scrfd ...
- Detection: VGG-SSD MobileNet-SSD SqueezeNet-SSD MobileNetV2-SSDLite MobileNetV3-SSDLite ...
- Detection: Faster-RCNN R-FCN ...
- Detection: YOLOv2 YOLOv3 MobileNet-YOLOv3 YOLOv4 YOLOv5 YOLOv7 YOLOX ...
- Detection: NanoDet
- Segmentation: FCN PSPNet UNet YOLACT ...
- Pose Estimation: SimplePose ...
how to build the ncnn library on Linux / Windows / macOS / Raspberry Pi3, Pi4 / POWER / Android / NVIDIA Jetson / iOS / WebAssembly / AllWinner D1 / Loongson 2K1000
- Build for Linux / NVIDIA Jetson / Raspberry Pi3, Pi4 / POWER
- Build for Windows x64 using VS2017
- Build for macOS
- Build for ARM Cortex-A family with cross-compiling
- Build for Hisilicon platform with cross-compiling
- Build for Android
- Build for iOS on macOS with Xcode
- Build for WebAssembly
- Build for AllWinner D1
- Build for Loongson 2K1000
- Build for Termux on Android
- Build for QNX
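The per-platform guides above share a common CMake flow. As a quick orientation, a typical native Linux build (assuming CMake and a C++ toolchain are installed; the `NCNN_VULKAN` option is only needed for GPU support and requires the Vulkan SDK) looks roughly like:

```shell
# Fetch ncnn and its submodules (glslang etc. are needed for Vulkan support)
git clone https://github.com/Tencent/ncnn.git
cd ncnn
git submodule update --init

# Configure and build out-of-tree; set -DNCNN_VULKAN=OFF if GPU support is not needed
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=ON ..
make -j"$(nproc)"
make install
```

Cross-compiled targets (Android, iOS, ARM Cortex-A, and so on) follow the same pattern with a platform toolchain file; see the matching guide above for the exact options.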
download prebuilt binary packages for Android and iOS
use ncnn with AlexNet, with detailed steps; recommended for beginners :)
use Netron for ncnn model visualization
out-of-the-box web model conversion
ncnn param and model file spec
ncnn operation param weight table
how to implement custom layer step by step
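As a taste of what the AlexNet walkthrough above covers, a minimal inference call with the ncnn C++ API looks roughly like the following sketch. The file names and the "data"/"prob" blob names are illustrative and must match your converted model's param file.

```cpp
#include "net.h" // ncnn

int main()
{
    ncnn::Net net;
    // The .param file describes the graph structure, the .bin file holds the weights
    if (net.load_param("alexnet.param"))
        return -1;
    if (net.load_model("alexnet.bin"))
        return -1;

    // Fill an input blob; a real application would convert image pixels,
    // e.g. with ncnn::Mat::from_pixels_resize()
    ncnn::Mat in(227, 227, 3);
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);    // input blob name from the .param file

    ncnn::Mat out;
    ex.extract("prob", out); // output blob name from the .param file
    return 0;
}
```

The extractor computes lazily: only the layers needed to produce the requested output blob are executed, which is what enables the partial-branch computation mentioned below.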
- Supports convolutional neural networks, multi-input and multi-branch structures, and computing only part of the branches
- No third-party library dependencies; does not rely on BLAS / NNPACK or any other computing framework
- Pure C++ implementation; cross-platform, supports Android, iOS, and so on
- Careful ARM NEON assembly-level optimization for extremely fast computation
- Sophisticated memory management and data structure design with very low memory footprint
- Supports multi-core parallel computing acceleration and ARM big.LITTLE CPU scheduling optimization
- Supports GPU acceleration via the next-generation low-overhead Vulkan API
- Extensible model design; supports 8-bit quantization and half-precision floating point storage; can import caffe/pytorch/mxnet/onnx/darknet/keras/tensorflow(mlir) models
- Supports loading network models via direct zero-copy memory references
- Extensible by registering custom layer implementations
- Well, it is strong; not afraid of heavy workloads QvQ
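The custom-layer extension point mentioned above follows a subclass-and-register pattern; a minimal sketch (the `MyLayer` name is illustrative) looks roughly like:

```cpp
#include "layer.h" // ncnn
#include "net.h"

// A pass-through layer as the smallest possible example
class MyLayer : public ncnn::Layer
{
public:
    MyLayer()
    {
        one_blob_only = true;   // single input, single output
        support_inplace = false;
    }

    virtual int forward(const ncnn::Mat& bottom_blob, ncnn::Mat& top_blob,
                        const ncnn::Option& opt) const
    {
        top_blob = bottom_blob; // a real layer would compute here
        return 0;
    }
};

DEFINE_LAYER_CREATOR(MyLayer)

// Register before loading a model whose param file references "MyLayer":
//   ncnn::Net net;
//   net.register_custom_layer("MyLayer", MyLayer_layer_creator);
```

The registration must happen before `load_param`, so the parser can resolve the custom layer type when it encounters it in the param file.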
- ✅ = known to work and runs fast with good optimization
- ✔️ = known to work, but speed may not be fast enough
- ❓ = should work, but not confirmed
- / = not applicable
| | Windows | Linux | Android | macOS | iOS |
|---|---|---|---|---|---|
| intel-cpu | ✔️ | ✔️ | ❓ | ✔️ | / |
| intel-gpu | ✔️ | ✔️ | ❓ | ❓ | / |
| amd-cpu | ✔️ | ✔️ | ❓ | ✔️ | / |
| amd-gpu | ✔️ | ✔️ | ❓ | ❓ | / |
| nvidia-gpu | ✔️ | ✔️ | ❓ | ❓ | / |
| qcom-cpu | ❓ | ✔️ | ✅ | / | / |
| qcom-gpu | ❓ | ✔️ | ✔️ | / | / |
| arm-cpu | ❓ | ❓ | ✅ | / | / |
| arm-gpu | ❓ | ❓ | ✔️ | / | / |
| apple-cpu | / | / | / | ✔️ | ✅ |
| apple-gpu | / | / | / | ✔️ | ✔️ |
| ibm-cpu | / | ✔️ | / | / | / |
- https://github.com/nihui/ncnn-android-squeezenet
- https://github.com/nihui/ncnn-android-styletransfer
- https://github.com/nihui/ncnn-android-mobilenetssd
- https://github.com/moli232777144/mtcnn_ncnn
- https://github.com/nihui/ncnn-android-yolov5
- https://github.com/xiang-wuu/ncnn-android-yolov7
- https://github.com/nihui/ncnn-android-scrfd 🤩
- https://github.com/shaoshengsong/qt_android_ncnn_lib_encrypt_example
- https://github.com/mizu-bai/ncnn-fortran Call ncnn from Fortran
- https://github.com/k2-fsa/sherpa Use ncnn for real-time speech recognition (i.e., speech-to-text); also supports embedded devices and provides mobile apps (e.g., an Android app)