Ollama alternative for Rockchip NPU: an efficient solution for running AI and deep learning models on Rockchip devices with optimized NPU support

RKLLama: LLM Server and Client for Rockchip 3588/3576

French version: available in the repository.

Overview

A server for running and interacting with LLMs optimized for the Rockchip RK3588(S) and RK3576 platforms. Unlike similar tools such as Ollama or llama.cpp, RKLLama runs models on the NPU.

  • rkllm-runtime library version: V1.1.4.
  • Tested on an Orange Pi 5 Pro (16 GB RAM).

File Structure

  • ./models: Place your .rkllm models here.
  • ./lib: C++ rkllm library used for inference, plus fix_freqence_platform.
  • ./app.py: REST API server.
  • ./client.py: Client to interact with the server.
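
Laid out as a tree (the ~/RKLLAMA install location is taken from the Uninstall section below):

    ~/RKLLAMA
    ├── models/     # place .rkllm model files here
    ├── lib/        # C++ rkllm runtime and fix_freqence_platform
    ├── app.py      # REST API server
    └── client.py   # CLI client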

Supported Python Versions:

  • Python 3.8
  • Python 3.9
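
setup.sh creates and activates a conda environment for you (see Usage below). If you prefer to manage the environment yourself, a minimal manual equivalent might look like this (the environment name rkllama is an assumption, not fixed by the project):

    # Manual stand-in for the environment setup.sh creates (env name is hypothetical)
    conda create -n rkllama python=3.9
    conda activate rkllama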

Tested Hardware and Environment

  • Hardware: Orange Pi 5 Pro (Rockchip RK3588S, 6 TOPS NPU).
  • OS: Ubuntu 24.04 arm64.

Main Features

  • Running models on NPU.
  • Listing available models.
  • Dynamic loading and unloading of models.
  • Inference requests (see the sketch after this list).
  • Streaming and non-streaming modes.
  • Message history.
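
As a rough illustration of an inference request, here is a minimal sketch assuming the server exposes an HTTP JSON endpoint on localhost; the port, route, and payload fields are assumptions, so check app.py for the actual API:

    # Hypothetical port, route, and fields: verify against app.py before using
    curl -s http://localhost:8080/generate \
      -H "Content-Type: application/json" \
      -d '{"model": "<model_name>", "prompt": "Hello!", "stream": false}'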

Documentation

Installation

  1. Download RKLLama:

    git clone https://github.com/notpunchnox/rkllama
    cd rkllama

  2. Install RKLLama:

    chmod +x setup.sh
    sudo ./setup.sh


Add a Model (.rkllm file)

  1. Download .rkllm models from HuggingFace, or convert your GGUF models to RKLLM (conversion software coming soon on my GitHub).

  2. Place your model files in the ~/RKLLAMA/models directory:

    cd ~/RKLLAMA/models
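
For example, a download via the Hugging Face CLI might look like the following; the repository id is a placeholder, so substitute a real RKLLM build:

    # Placeholder repository id: substitute a real .rkllm build from HuggingFace
    pip install -U "huggingface_hub[cli]"
    huggingface-cli download <user>/<model-rkllm> --local-dir ~/RKLLAMA/models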

Usage

Run Server

The conda environment is activated automatically when the server starts, and the NPU frequency is set.
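
If you want to check the NPU frequency yourself, it is typically exposed through devfreq on RK3588 boards; the node name below was observed on RK3588 kernels and may differ on yours:

    # Inspect the NPU devfreq node (fdab0000.npu observed on RK3588; may vary)
    cat /sys/class/devfreq/fdab0000.npu/cur_freq
    cat /sys/class/devfreq/fdab0000.npu/governor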

  1. Start the server:

    rkllama serve

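To keep the server running after the terminal closes, one plain-shell option (not project-specific) is:

    # Run the server in the background and capture its log
    nohup rkllama serve > ~/rkllama.log 2>&1 &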

Run Client

  1. Start the client:

    rkllama

or

    rkllama help


  2. List the available models:

    rkllama list


  3. Run a model:

    rkllama run <model_name>


Then start chatting.

Uninstall

  1. Go to the ~/RKLLAMA/ folder:

    cd ~/RKLLAMA/
    cp ./uninstall.sh ../
    cd ../ && chmod +x ./uninstall.sh && ./uninstall.sh
  2. If you don't have the uninstall.sh file:

    wget https://raw.githubusercontent.com/NotPunchnox/rkllama/refs/heads/main/uninstall.sh
    chmod +x ./uninstall.sh
    ./uninstall.sh



Upcoming Features

  • Ability to pull models
  • Add multimodal models
  • Add embedding models
  • GGUF to RKLLM conversion software


Author:

notpunchnox
