
ncnn


ncnn is a high-performance neural network inference computing framework optimized for mobile platforms. ncnn has been designed with mobile deployment and use in mind from the beginning. It has no third-party dependencies, is cross-platform, and runs faster than all known open-source frameworks on mobile-phone CPUs. With ncnn's efficient implementation, developers can easily deploy deep learning models to mobile platforms, build intelligent apps, and bring AI to your fingertips. ncnn is currently used in many Tencent applications, such as QQ, Qzone, WeChat and Pitu.
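
For orientation, the typical C++ inference flow looks roughly like the sketch below: load a .param/.bin model pair, feed an input blob, and extract an output blob. The model file names and the "data"/"prob" blob names are placeholders for whatever model you exported, not files shipped with this repository; see the alexnet guide in the HowTo section for a complete walkthrough.

```cpp
// A minimal sketch of the typical ncnn inference flow. The model files
// ("squeezenet.param"/"squeezenet.bin") and the blob names ("data"/"prob")
// are placeholders for your own exported model.
#include <stdio.h>

#include "net.h"   // ncnn::Net, ncnn::Extractor, ncnn::Mat

int main()
{
    ncnn::Net net;

    // Load the network description and the binary weights.
    if (net.load_param("squeezenet.param") || net.load_model("squeezenet.bin"))
    {
        fprintf(stderr, "failed to load model\n");
        return -1;
    }

    // Dummy 227x227x3 input; a real app would build this with
    // ncnn::Mat::from_pixels() from camera or decoded image data.
    ncnn::Mat in(227, 227, 3);
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);      // bind the input blob

    ncnn::Mat out;
    ex.extract("prob", out);   // run the graph up to the output blob

    fprintf(stderr, "output: %d x %d x %d\n", out.w, out.h, out.c);
    return 0;
}
```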



Tech exchange QQ group: 637093648 (lots of experts), join answer: 卷卷卷卷卷 (group is full)

Pocky QQ group (MLIR YES!): 677104663 (lots of experts), join answer: multi-level intermediate representation

Telegram Group https://t.me/ncnnyes


Current build status matrix

System              CPU (32bit)    CPU (64bit)    GPU (32bit)    GPU (64bit)
Linux (GCC)         ✔              ✔              -              ✔
Linux (Clang)       ✔              ✔              -              ✔
Linux (ARM)         ✔              ✔              -              -
Linux (MIPS)        ✔              ✔              -              -
Linux (RISC-V)      -              ✔              -              -
Linux (LoongArch)   -              ✔              -              -
Windows             ✔              ✔              -              ✔
Windows (ARM)       ✔              ✔              -              -
macOS               -              ✔              -              ✔
macOS (ARM)         -              ✔              -              ✔
Android             ✔              ✔              ✔              ✔
Android-x86         ✔              ✔              ✔              ✔
iOS                 ✔              ✔              -              ✔
iOS Simulator       ✔              ✔              -              ✔
WebAssembly         ✔              -              -              -
RISC-V GCC/Newlib   ✔              ✔              -              -

(✔ = CI build available for this configuration, - = no CI build)

Supports most commonly used CNN networks

鏀寔澶ч儴鍒嗗父鐢ㄧ殑 CNN 缃戠粶


HowTo

how to build ncnn library on Linux / Windows / macOS / Raspberry Pi3, Pi4 / POWER / Android / NVIDIA Jetson / iOS / WebAssembly / AllWinner D1 / Loongson 2K1000

download prebuilt binary packages for Android and iOS

use ncnn with alexnet with detailed steps, recommended for beginners :)

ncnn component usage guide with alexnet (Chinese version), with detailed steps, strongly recommended for newcomers :)

use netron for ncnn model visualization

out-of-the-box web model conversion

ncnn low-level operation api

ncnn param and model file spec

ncnn operation param weight table

how to implement custom layer step by step (see the sketch after this list)
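
As a companion to the custom-layer guide above, here is a minimal, hypothetical sketch of what registering a custom layer looks like against the public ncnn::Layer interface. The "Swish" op and its activation formula are illustrative placeholders, not something the guide prescribes.

```cpp
// A hypothetical custom layer (a simple Swish activation) and its
// registration; a sketch, not a substitute for the step-by-step guide.
#include <math.h>

#include "layer.h"
#include "net.h"

class Swish : public ncnn::Layer
{
public:
    Swish()
    {
        one_blob_only = true;    // one input blob, one output blob
        support_inplace = true;  // may overwrite the input in place
    }

    virtual int forward_inplace(ncnn::Mat& bottom_top_blob, const ncnn::Option& opt) const
    {
        // y = x * sigmoid(x), applied element-wise per channel
        const int size = bottom_top_blob.w * bottom_top_blob.h;
        #pragma omp parallel for num_threads(opt.num_threads)
        for (int q = 0; q < bottom_top_blob.c; q++)
        {
            float* ptr = bottom_top_blob.channel(q);
            for (int i = 0; i < size; i++)
                ptr[i] = ptr[i] / (1.f + expf(-ptr[i]));
        }
        return 0;
    }
};

DEFINE_LAYER_CREATOR(Swish)

// Register the layer type before loading a .param file that references it:
//   ncnn::Net net;
//   net.register_custom_layer("Swish", Swish_layer_creator);
```

The layer type string passed to register_custom_layer must match the type name used in the .param file.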


FAQ

ncnn throw error

ncnn produce wrong result

ncnn vulkan


Features

  • Supports convolutional neural networks, multiple inputs and multi-branch structures, and can compute only part of the branches
  • No third-party library dependencies; does not rely on BLAS/NNPACK or any other computing framework
  • Pure C++ implementation, cross-platform, supports Android, iOS and so on
  • Careful ARM NEON assembly-level optimization for extremely fast computation
  • Sophisticated memory management and data structure design, very low memory footprint
  • Supports multi-core parallel computing acceleration, ARM big.LITTLE CPU scheduling optimization
  • Supports GPU acceleration via the next-generation low-overhead Vulkan API
  • Extensible model design, supports 8-bit quantization and half-precision floating point storage, can import caffe/pytorch/mxnet/onnx/darknet/keras/tensorflow(mlir) models
  • Supports loading network models by zero-copy reference to in-memory data (see the sketch after this list)
  • Supports registering custom layer implementations for extension
  • Well, it is strong, not afraid of being stuffed with 卷 QvQ
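
To make the Vulkan, half-precision and zero-copy points above concrete, the sketch below shows the relevant ncnn::Option flags and the in-memory loading overloads. The buffer arguments are placeholders: the binary param layout is the one produced by the ncnn2mem tool, and the caller must keep both buffers alive for the lifetime of the Net, because the data is referenced in place rather than copied.

```cpp
// Minimal sketch of the option flags and zero-copy loading mentioned above.
// param_bin points at a binary param (as produced by the ncnn2mem tool) and
// weight_data at the corresponding weights; both are placeholders and must
// outlive the Net, since the data is referenced, not copied.
#include "net.h"

void configure_and_load(ncnn::Net& net,
                        const unsigned char* param_bin,
                        const unsigned char* weight_data)
{
    net.opt.use_vulkan_compute = true;  // GPU path, effective only if built with NCNN_VULKAN
    net.opt.use_fp16_storage = true;    // half-precision storage where supported
    net.opt.use_int8_inference = true;  // 8-bit quantized kernels for quantized models

    // Zero-copy reference loading from memory.
    net.load_param(param_bin);
    net.load_model(weight_data);
}
```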


supported platform matrix

  • ✅ = known to work and runs fast with good optimization
  • ✔️ = known to work, but speed may not be fast enough
  • ❔ = should work, not confirmed
  • / = not applicable

             Windows   Linux   Android   macOS   iOS
intel-cpu    ✔️        ✔️      ❔         ✔️      /
intel-gpu    ✔️        ✔️      ❔         ❔       /
amd-cpu      ✔️        ✔️      ❔         ✔️      /
amd-gpu      ✔️        ✔️      ❔         ❔       /
nvidia-gpu   ✔️        ✔️      ❔         ❔       /
qcom-cpu     ❔         ✔️      ✅        /       /
qcom-gpu     ❔         ✔️      ✔️        /       /
arm-cpu      ❔         ❔       ✅        /       /
arm-gpu      ❔         ❔       ✔️        /       /
apple-cpu    /         /       /         ✔️      ✅
apple-gpu    /         /       /         ✔️      ✔️
ibm-cpu      /         ✔️      /         /       /

Project examples



License

BSD 3 Clause
