
feat: add node-llama-cpp library and CLI to GGUF models #949

Merged
merged 5 commits into huggingface:main from node-llama-cpp-option on Oct 15, 2024

Conversation

@giladgd (Contributor) commented on Oct 5, 2024

This PR adds a code snippet for using node-llama-cpp with GGUF models, as well as CLI commands to:

  • Chat with the model
  • Estimate the model's compatibility with the current hardware without downloading it

node-llama-cpp makes it easy to use llama.cpp in Node.js, and also provides convenient CLI commands that can be used without installing anything (as long as Node.js is installed).
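
For reference, the library usage such a snippet covers looks roughly like the following. This is a minimal sketch against node-llama-cpp's current API, assuming an ESM project; the model path is a placeholder and the exact snippet text in this PR may differ.

```ts
import path from "node:path";
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// Load a GGUF model from disk (placeholder path; point this at a real .gguf file).
const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: path.join(process.cwd(), "models", "model.gguf"),
});

// Create a context and start a chat session with the model.
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence(),
});

const answer = await session.prompt("What is a GGUF file?");
console.log(answer);
```

The CLI side works without any install step, via commands of the form `npx -y node-llama-cpp chat ...` to chat with a model and `npx -y node-llama-cpp inspect estimate ...` to estimate hardware compatibility; the exact flags are an assumption here, see the node-llama-cpp docs for the precise syntax.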

I couldn't find where to add the icon for node-llama-cpp, so in case it's not in this repo, you can use this icon.

@Vaibhavs10 (Member) left a comment:

Thanks for the PR @giladgd - It's already looking very good.

re: LocalApps, can you send the SVG of the logo for node-llama-cpp as well?

More of a meta comment: I think this would fit better under LocalApps for now, but let's wait to see what others think as well. cc: @julien-c

packages/tasks/src/local-apps.ts
@@ -448,6 +448,13 @@ export const MODEL_LIBRARIES_UI_ELEMENTS = {
filter: true,
countDownloads: `path_extension:"nemo" OR path:"model_config.yaml"`,
},
"node-llama-cpp": {
Member:

Are there any examples of where this is used? I don't see any tagged repos here: https://huggingface.co/models?other=node-llama-cpp

Note: Whichever repo has the tag node-llama-cpp would have the snippet you defined.

Member:

We should only have node-llama-cpp as a local app option, not a library, for consistency with the rest

Member:

Yes! read this: #949 (review)

Member:

agree

@giladgd (Contributor, Author) commented on Oct 7, 2024:

You can use node-llama-cpp to run llama.cpp inside a Node.js project, in a similar manner to how you can use llama-cpp-python to run llama.cpp inside a Python project, so I think it's useful to have a code snippet for this.
I see that llama-cpp-python appears on all llama.cpp-compatible GGUF model repos, regardless of whether the repo has a llama-cpp-python tag (for example, this repo).

Since node-llama-cpp can also be used as a CLI to interact with models without installing anything,
I think both use cases are worth having an easy-to-use snippet for.

Is there a way to make both accessible?

Member:

IMO let's first ship in local apps and we can always revisit later

@giladgd (Contributor, Author):

Alright, I removed the library code snippet.
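
For context, the remaining local app entry lives in packages/tasks/src/local-apps.ts. A minimal sketch of what such an entry looks like, assuming the same field names as the existing entries in that file (the display predicate and the CLI command below are illustrative assumptions, not the exact merged code):

```ts
// Illustrative sketch only; field names follow the existing local-apps.ts entries,
// while the display predicate and CLI command are assumptions.
interface LocalAppSnippet {
    title: string;
    setup: string;
    content: string;
}

const nodeLlamaCppLocalApp = {
    prettyLabel: "node-llama-cpp",
    docsUrl: "https://github.com/withcatai/node-llama-cpp",
    mainTask: "text-generation" as const,
    // Show the app on the same GGUF repos where llama.cpp-based local apps appear.
    displayOnModelPage: (model: { tags: string[] }) => model.tags.includes("gguf"),
    snippet: (modelId: string, filepath?: string): LocalAppSnippet[] => [
        {
            title: "Chat with the model",
            setup: "# No installation needed; npx runs the CLI directly (requires Node.js)",
            content: `npx -y node-llama-cpp chat --model "hf:${modelId}/${filepath ?? "<GGUF file>"}"`,
        },
    ],
};
```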

@osanseviero (Member) left a comment:

Very nice 🔥


@giladgd (Contributor, Author) commented on Oct 7, 2024

@Vaibhavs10 You can use this SVG icon in places that need a small icon, and this SVG logo for places that need a medium-sized logo (note that this logo's colors change depending on the color-scheme CSS property used on the image)

@Vaibhavs10 (Member) left a comment:

Thanks a lot for iterating here @giladgd - really appreciate it! We'll have a new release of LocalApps soon. There's an update required from our end (on our internal codebase) - this PR will be merged along with that!

(Internal note: the SVGs are present here: #949 (comment))

cc: @enzostvs @krampstudio for vis 🤗

@Vaibhavs10 merged commit 683cbd0 into huggingface:main on Oct 15, 2024 (3 of 4 checks passed).
@giladgd deleted the node-llama-cpp-option branch on October 17, 2024.
@giladgd (Contributor, Author) commented on Oct 17, 2024

@Vaibhavs10 Do you know when it will show up on GGUF models?
I saw that you added Ollama yesterday, but node-llama-cpp is not there.

@julien-c (Member):
@giladgd it's been deployed now!

[screenshot]

cc @Vaibhavs10

@giladgd (Contributor, Author) commented on Oct 18, 2024

@julien-c I don't see it in the drop-down menu on my end (also when I'm not logged in).
I've tested it on this model:
[screenshot]

It should be available on all models where llama.cpp is available, since it supports all llama.cpp models.

@julien-c (Member):

you need to select it in hf.co/settings/local-apps

@giladgd (Contributor, Author) commented on Oct 18, 2024

Is it possible to show it by default alongside llama.cpp and Ollama?
node-llama-cpp makes it easy to run llama.cpp models locally with a single npx command, without installing anything, so I think showing it by default, including for unauthenticated users, would be very helpful.
