To run:
- python3 scripts/tokenizer.py
- python3 scripts/export.py state-spaces/mamba-130m models/model.bin
- make fast
- ./build/mamba models/model.bin -n 20 -i "Customer Support should" -t 0.0
Command-line arguments will control inference options such as the quantization level, debugging verbosity, and input prompt.
Model configuration will be done through model_config.yaml, covering options such as temperature (text diversity), the amount of generated text, and batch size. There may be multiple named configurations, selected through a command-line argument; a hypothetical example is sketched below.
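A hypothetical model_config.yaml along these lines (every key, value, and configuration name here is illustrative, not a settled schema):

# Illustrative only: named configurations, selected by name via a command-line argument
default:
  temperature: 0.7   # text diversity
  max_tokens: 256    # amount of generated text
  batch_size: 1
greedy:
  temperature: 0.0   # deterministic decoding
  max_tokens: 64
  batch_size: 1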
- Initial C++ Implementation
- Quantization
- 1-bit weight experimentation (a minimal int8 quantization sketch follows this list)
- Speculative Decoding
  - draft model fine-tuning for Jamba
- Flash memory
  - neuron activation data
  - hot and cold neuron prediction
- Matrix multiplication and overall optimization
  - implementation of some optimization techniques (a tiled matmul sketch appears after the reference links below)
  - https://github.com/MDK8888/GPTFast/tree/master
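Since the quantization items above target the linear-layer weights (see the state-spaces/mamba#133 note among the links below), here is a minimal sketch of symmetric per-row int8 weight quantization in C; the function name and scaling scheme are illustrative, not this repo's code:

#include <math.h>
#include <stdint.h>

// Symmetric int8 quantization of one weight row: q = round(w / scale),
// with scale = max(|w|) / 127. Dequantize with w ~= q * scale.
void quantize_row_int8(const float* w, int n, int8_t* q, float* scale) {
    float amax = 0.0f;
    for (int i = 0; i < n; i++) {
        float a = fabsf(w[i]);
        if (a > amax) amax = a;
    }
    *scale = (amax > 0.0f) ? amax / 127.0f : 1.0f;
    for (int i = 0; i < n; i++) {
        q[i] = (int8_t)lroundf(w[i] / *scale);
    }
}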
Mamba LLM
https://github.com/redotvideo/mamba-chat
https://arxiv.org/abs/2310.04564
https://arxiv.org/abs/2312.11514
https://arxiv.org/abs/2402.11131
https://arxiv.org/abs/2211.17192
https://arxiv.org/abs/2402.17764
state-spaces/mamba#133 (only quantize nn.Linear)
https://huggingface.co/docs/transformers/v4.33.0/en/main_classes/quantization
https://leimao.github.io/article/Neural-Networks-Quantization/
https://coffeebeforearch.github.io/2020/06/23/mmul.html
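For the matrix-multiplication item in the roadmap, the coffeebeforearch post above covers cache blocking; here is a minimal loop-tiling sketch in C (block size, loop order, and names are illustrative, not tuned):

// Cache-blocked matmul C = A * B for row-major n x n matrices.
// C must be zero-initialized; BLOCK is chosen so sub-tiles fit in cache.
#define BLOCK 64
void matmul_blocked(const float* A, const float* B, float* C, int n) {
    for (int ii = 0; ii < n; ii += BLOCK)
        for (int kk = 0; kk < n; kk += BLOCK)
            for (int jj = 0; jj < n; jj += BLOCK)
                for (int i = ii; i < ii + BLOCK && i < n; i++)
                    for (int k = kk; k < kk + BLOCK && k < n; k++) {
                        float a = A[i * n + k];
                        for (int j = jj; j < jj + BLOCK && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}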
Inference of Mamba models in pure C
Inspired by and using code from llama2.c.
This implements only the recurrent mode of the Mamba SSM (the per-token recurrence is sketched below). You can compare it with the related PyTorch implementation.
There is no support for batches; the code is minimal, for learning purposes. Even so, it is faster than PyTorch on CPU!
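For reference, the recurrence computed per token in this mode (following the Mamba paper's discretization: zero-order hold for A and a simplified Euler step for B, omitting the D skip connection here) is:

\[
h_t = \exp(\Delta A)\, h_{t-1} + \Delta B\, x_t, \qquad y_t = C\, h_t
\]

where \Delta, B, and C are input-dependent at each step.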
You can use these models stored on HuggingFace:
state-spaces/mamba-130m
state-spaces/mamba-370m
state-spaces/mamba-790m
state-spaces/mamba-1.4b
state-spaces/mamba-2.8b
state-spaces/mamba-2.8b-slimpj
You can specify the model name as an argument to the export.py script.
Note that the export script will download the model (if it's not already downloaded) to the Hugging Face cache directory.
Optionally, you can also specify the path to the model, if you downloaded it manually. Example:
wget https://huggingface.co/state-spaces/mamba-130m/resolve/main/config.json?download=true -O config.json
wget https://huggingface.co/state-spaces/mamba-130m/resolve/main/pytorch_model.bin?download=true -O pytorch_model.bin
python3 export.py . model.bin
As it is a recurrent model, it is possible to save the internal state and then return to that state later.
To get a copy of the internal state:
int state_size;
char* state = get_internal_state(mamba, &state_size);  // copies the state; its size in bytes is written to state_size
To set the internal state:
set_internal_state(mamba, state, state_size);  // restores a previously saved state
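As a usage sketch, the state copy makes it possible to "fork" generation from a shared prompt. Only get_internal_state and set_internal_state come from this codebase; the Mamba type usage, the elided forward steps, and the free() at the end are assumptions for illustration:

#include <stdlib.h>

void fork_generation(Mamba* mamba) {
    int state_size;
    // Snapshot the recurrent state, e.g. right after processing a shared prompt.
    char* snapshot = get_internal_state(mamba, &state_size);

    // ... run the model forward on some tokens; this mutates the internal state ...

    // Rewind to the snapshot and continue from the same point,
    // e.g. to generate an alternative completion.
    set_internal_state(mamba, snapshot, state_size);

    free(snapshot);  // assumption: the copy returned by get_internal_state is heap-allocated
}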