feat: add node-llama-cpp library and CLI to GGUF models #949
Conversation
@@ -448,6 +448,13 @@ export const MODEL_LIBRARIES_UI_ELEMENTS = {
 		filter: true,
 		countDownloads: `path_extension:"nemo" OR path:"model_config.yaml"`,
 	},
+	"node-llama-cpp": {
Are there any examples of where this is used? I don't see any tagged repos here: https://huggingface.co/models?other=node-llama-cpp

Note: whichever repo has the `node-llama-cpp` tag would have the snippet you defined.
We should only have `node-llama-cpp` as a local app option, not a library, for consistency with the rest.
Yes! read this: #949 (review)
Agree
You can use `node-llama-cpp` to run `llama.cpp` inside a Node.js project, in a similar manner to how you can use `llama-cpp-python` to run `llama.cpp` inside a Python project, so I think it's useful to have a code snippet for this.

I see that `llama-cpp-python` appears on all `llama.cpp`-compatible GGUF model repos, regardless of whether the repo has a `llama-cpp-python` tag (for example, this repo).

Since `node-llama-cpp` can also be used as a CLI to interact with models without installing anything, I think both cases are worth having an easy-to-use snippet.
Is there a way to make both accessible?
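For context, a minimal sketch of what such a library snippet could look like, assuming `node-llama-cpp`'s documented v3 API (`getLlama`, `LlamaChatSession`); the model path is a placeholder, not a file from this PR:

```javascript
// Sketch only: load a local GGUF model and run a single chat prompt.
// "path/to/model.gguf" is a placeholder for any downloaded GGUF file.
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

const answer = await session.prompt("Hi there");
console.log(answer);
```

This is the kind of snippet that would appear on a model page if the library integration were kept; running it requires Node.js, the `node-llama-cpp` package, and an actual GGUF model file.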
IMO let's first ship in local apps and we can always revisit later
Alright, I removed the library code snippet.
Very nice 🔥
@Vaibhavs10 You can use this SVG icon in places that need a small icon, and this SVG logo for places that need a medium-sized logo (note that this logo's colors change depending on the
Thanks a lot for iterating here @giladgd - really appreciate it! We'll have a new release of Local Apps soon. There's an update required on our end (in our internal codebase), and this PR will be merged along with that!
(Internal note: the SVGs are present here: #949 (comment))
cc: @enzostvs @krampstudio for vis 🤗
@Vaibhavs10 Do you know when it will show up on GGUF models?

@giladgd it's been deployed now! cc @Vaibhavs10

@julien-c I don't see it on the drop-down menu on my end (also when I'm not logged in). It should be available on all models where

You need to select it in hf.co/settings/local-apps

Is it possible to show it by default alongside
This PR adds a code snippet for using `node-llama-cpp` on GGUF models, and also CLI commands to interact with them.

`node-llama-cpp` makes using `llama.cpp` in Node.js easy, and also provides convenient CLI commands that can be used without even installing it (assuming Node.js is installed).

I couldn't find where to add the icon for `node-llama-cpp`, so in case it's not in this repo, you can use this icon.
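The no-install CLI flow mentioned above could look roughly like this; the `chat` subcommand comes from `node-llama-cpp`'s docs, while the model path and flag spelling should be treated as placeholders rather than the exact snippet this PR ships:

```shell
# Sketch: chat with a local GGUF model without a global install.
# npx fetches node-llama-cpp on demand; only Node.js is required.
npx -y node-llama-cpp chat --model path/to/model.gguf
```

The `-y` flag tells npx to skip the install confirmation prompt, which keeps the snippet copy-pasteable on a model page.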