Request: NUMA support #660

Open
frz121 opened this issue Feb 4, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

frz121 commented Feb 4, 2024

Could you please add NUMA support to the application?
Essentially, this is just the "--numa" option from llama.cpp.
It lets models run significantly faster on multiprocessor (multi-socket) Linux systems.
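
For reference, a minimal sketch of what the equivalent llama.cpp invocation looks like (the model path is a placeholder; older llama.cpp builds treated --numa as a plain on/off switch, while newer builds accept a strategy argument, so check your version's --help):

```sh
# Older llama.cpp builds: --numa is a simple toggle that enables
# NUMA-aware behavior.
./main -m model.gguf --numa

# Newer builds take a strategy: "distribute" spreads execution
# across all NUMA nodes, "isolate" keeps it on the starting node.
./main -m model.gguf --numa distribute
```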

@LostRuins LostRuins added the enhancement New feature or request label Feb 8, 2024
@rogerfachini

With the release of DeepSeek, those of us running on CPU with EPYC chips would see a noticeable performance benefit from this. I've been testing locally using numactl with koboldcpp to force NUMA-aware scheduling (discussed in the llama.cpp repo here: ggerganov#1437).

Having this feature available as a flag in kobold.cpp, similar to llama.cpp, would be quite helpful.
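
As an interim workaround before a native flag lands, here is a sketch of the numactl approach mentioned above (node numbers and file names are placeholders for a two-socket system; adjust them to your topology):

```sh
# Interleave memory pages across all NUMA nodes so the model weights
# are not served from a single socket's memory controller.
numactl --interleave=all python koboldcpp.py --model model.gguf

# Or pin both CPU and memory to one node to avoid cross-socket
# traffic entirely (node 0 is just an example).
numactl --cpunodebind=0 --membind=0 python koboldcpp.py --model model.gguf
```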
