docs: improve documentation #324

Merged 1 commit on Sep 21, 2024
2 changes: 1 addition & 1 deletion .vitepress/components/BlogEntry/BlogEntry.vue
@@ -47,7 +47,7 @@ const dateText = new Date(props.date).toLocaleDateString("en-US", {
}"
/>
</a>
-<p class="description">{{ props.description }}</p>
+<p class="description" v-html="props.description"></p>
<a class="readMore" :href="withBase(props.link)">
Read more
<span class="vpi-arrow-right"></span>
14 changes: 12 additions & 2 deletions .vitepress/components/HomePage/HomePage.vue
@@ -244,6 +244,16 @@ getElectronExampleAppDownloadLink()
}
}

+:global(.VPHome .VPHero .container .main) {
+    &:global(>.name) {
+        font-weight: 701;
+    }
+
+    &:global(>.text) {
+        font-weight: 699;
+    }
+}
+
:global(html.start-animation) {
.content {
transition: opacity 0.5s 0.25s, transform 0.5s 0.25s, translate 0.5s, display 1s ease-in-out;
@@ -292,7 +302,7 @@ getElectronExampleAppDownloadLink()
}
}

-&:global(> .text) {
+&:global(>.text) {
transition: font-weight 0.5s ease-in-out;

@starting-style {
@@ -301,7 +311,7 @@
}
}

-&:global(> .tagline) {
+&:global(>.tagline) {
transition: transform 0.5s ease-in-out;

@starting-style {
17 changes: 16 additions & 1 deletion .vitepress/config.ts
@@ -324,7 +324,22 @@ export default defineConfig({
search: {
provider: "local",
options: {
-detailedView: true
+detailedView: true,
+miniSearch: {
+    searchOptions: {
+        boostDocument(term, documentId, storedFields) {
+            const firstTitle = (storedFields?.titles as string[])?.[0];
+            if (firstTitle?.startsWith("Type Alias: "))
+                return -0.8;
+            else if (firstTitle?.startsWith("Class: "))
+                return -0.9;
+            else if (firstTitle?.startsWith("Function: "))
+                return -0.95;
+
+            return 1;
+        }
+    }
+}
}
},
sidebar: {
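The `boostDocument` hook above de-prioritizes generated API reference pages (titles starting with `Type Alias: `, `Class: `, or `Function: `) so that handwritten guide pages surface first in search results. A minimal sketch of the ranking effect, assuming MiniSearch multiplies a document's relevance score by the returned boost (the scores and titles below are made up for illustration):

```ts
type Hit = {title: string; score: number};

// Hypothetical raw relevance scores for two pages matching the same query
const rawHits: Hit[] = [
    {title: "Class: LlamaChatSession", score: 12.4},
    {title: "Using a chat session", score: 9.1}
];

function boostFor(title: string): number {
    if (title.startsWith("Type Alias: ")) return -0.8;
    if (title.startsWith("Class: ")) return -0.9;
    if (title.startsWith("Function: ")) return -0.95;
    return 1;
}

const ranked = rawHits
    .map((hit) => ({...hit, score: hit.score * boostFor(hit.title)}))
    .sort((a, b) => b.score - a.score);

// The guide page now ranks first: a negative multiplier makes the API
// reference entry's score negative, pushing it below every positive hit
console.log(ranked);
```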
4 changes: 4 additions & 0 deletions .vitepress/theme/style.css
@@ -547,6 +547,10 @@ html.blog-page .vp-doc h2 {
border-top: none;
}

+html.blog-page .vp-doc>div>hr:first-of-type {
+    display: none;
+}
+
/*#VPContent {*/
/* background-image: radial-gradient(1200px 380px at 50% 0%, color-mix(in srgb, var(--vp-c-brand-1) 32%, transparent), transparent 64%);*/
/*}*/
2 changes: 1 addition & 1 deletion README.md
@@ -25,7 +25,7 @@
* A Complete suite of everything you need to use LLMs in your projects
* [Use the CLI to chat with a model without writing any code](#try-it-without-installing)
* Up-to-date with the latest `llama.cpp`. Download and compile the latest release with a [single CLI command](https://node-llama-cpp.withcat.ai//guide/building-from-source#downloading-a-release)
-* Force a model to generate output in a parseable format, [like JSON](https://node-llama-cpp.withcat.ai/guide/chat-session#json-response), or even force it to [follow a specific JSON schema](https://node-llama-cpp.withcat.ai/guide/chat-session#response-json-schema)
+* Enforce a model to generate output in a parseable format, [like JSON](https://node-llama-cpp.withcat.ai/guide/chat-session#json-response), or even force it to [follow a specific JSON schema](https://node-llama-cpp.withcat.ai/guide/chat-session#response-json-schema)
* [Provide a model with functions it can call on demand](https://node-llama-cpp.withcat.ai/guide/chat-session#function-calling) to retrieve information or perform actions
* [Embedding support](https://node-llama-cpp.withcat.ai/guide/embedding)
* Great developer experience with full TypeScript support, and [complete documentation](https://node-llama-cpp.withcat.ai/guide/)
7 changes: 6 additions & 1 deletion docs/blog/blog.data.ts
@@ -1,5 +1,6 @@
import {createContentLoader} from "vitepress";
import {ensureLocalImage} from "../../.vitepress/utils/ensureLocalImage.js";
+import {htmlEscape} from "../../.vitepress/utils/htmlEscape.js";

const loader = {
async load() {
@@ -17,7 +18,11 @@ const loader = {
return {
title: post.frontmatter.title as string | undefined,
date: post.frontmatter.date as string | undefined,
-description: post.excerpt || post.frontmatter.description as string | undefined,
+description: post.excerpt || (
+    (post.frontmatter.description as string | undefined) != null
+        ? htmlEscape(post.frontmatter.description as string)
+        : undefined
+),
link: post.url,
image: await getImage(
typeof post.frontmatter.image === "string"
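Together with the `v-html` change in `BlogEntry.vue` above, descriptions are now rendered as HTML: `post.excerpt` already contains rendered HTML, while a plain frontmatter description is escaped first. The `htmlEscape` utility itself is not part of this diff; a minimal sketch of what such a helper typically looks like (an assumption, not the repository's actual implementation):

```ts
// Hypothetical stand-in for .vitepress/utils/htmlEscape.js; the real
// implementation is not shown in this diff
export function htmlEscape(text: string): string {
    return text
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
}
```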
6 changes: 6 additions & 0 deletions docs/guide/Metal.md
@@ -8,6 +8,12 @@ and when building from source on macOS on Apple Silicon Macs, Metal support is e

`llama.cpp` doesn't support Metal well on Intel Macs, so it is disabled by default on those machines.

+<div class="info custom-block" style="padding-top: 8px">
+
+[Accelerate framework](https://developer.apple.com/accelerate/) is always enabled on Mac.
+
+</div>
+
## Toggling Metal Support {#building}
### Prerequisites
* [`cmake-js` dependencies](https://github.com/cmake-js/cmake-js#:~:text=projectRoot/build%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%5Bstring%5D-,Requirements%3A,-CMake)
24 changes: 24 additions & 0 deletions docs/guide/chat-session.md
@@ -534,6 +534,30 @@ console.log("AI: " + res);
```

## Complete User Prompt {#complete-prompt}

+<script setup lang="ts">
+import {withBase} from "vitepress";
+import {ref} from "vue";
+import {
+    defaultDownloadElectronExampleAppLink,
+    getElectronExampleAppDownloadLink
+} from "../../.vitepress/components/HomePage/utils/getElectronExampleAppDownloadLink.js";
+
+const downloadElectronExampleAppLink = ref<string>(defaultDownloadElectronExampleAppLink);
+
+getElectronExampleAppDownloadLink()
+    .then((link) => {
+        downloadElectronExampleAppLink.value = link;
+    });
+</script>
+
+<div class="info custom-block" style="padding-top: 8px;">
+
+You can try this feature in the <a target="_blank" :href="downloadElectronExampleAppLink">example Electron app</a>.
+Just type a prompt and see the completion generated by the model.
+
+</div>
+
You can generate a completion to a given incomplete user prompt and let the model complete it.

The advantage of doing that on the chat session is that it will use the chat history as context for the completion,
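For context, the feature this section documents, completing a user prompt on a chat session, is used roughly like this (a sketch based on the `completePrompt` method covered by this guide page; the model path and option values are illustrative):

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf" // hypothetical path
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Complete an unfinished user prompt; earlier chat messages in the session
// are used as context for the completion
const completion = await session.completePrompt("Write a poem about", {
    maxTokens: 40 // illustrative limit
});
console.log("Completion: " + completion);
```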
2 changes: 1 addition & 1 deletion docs/guide/grammar.md
@@ -1,5 +1,5 @@
# Using Grammar
-Use this to force a model to generate response in a specific format of text, like `JSON` for example.
+Use this to enforce a model to generate response in a specific format of text, like `JSON` for example.

::: tip NOTE

3 changes: 2 additions & 1 deletion docs/guide/index.md
@@ -39,6 +39,7 @@ as well as balances the default settings to get the best performance from your h
No need to manually configure anything.

**Metal:** Enabled by default on Macs with Apple Silicon. If you're using a Mac with an Intel chip, [you can manually enable it](./Metal.md).
+[Accelerate framework](https://developer.apple.com/accelerate/) is always enabled.

**CUDA:** Used by default when support is detected. For more details, see the [CUDA guide](./CUDA.md).

@@ -126,7 +127,7 @@ console.log("AI: " + a2);


### Chatbot With JSON Schema {#chatbot-with-json-schema}
-To force a model to generate output according to a JSON schema, use [`llama.createGrammarForJsonSchema()`](../api/classes/Llama.md#creategrammarforjsonschema).
+To enforce a model to generate output according to a JSON schema, use [`llama.createGrammarForJsonSchema()`](../api/classes/Llama.md#creategrammarforjsonschema).

It'll force the model to generate output according to the JSON schema you provide, and it'll do it on the text generation level.

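For reference, the `llama.createGrammarForJsonSchema()` API mentioned above is used roughly like this (a sketch; the schema, prompt, and model path are illustrative):

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf" // hypothetical path
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Token generation is constrained so the output always matches this schema
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        positiveWordsInUserMessage: {
            type: "array",
            items: {type: "string"}
        }
    }
} as const);

const res = await session.prompt("It's a wonderful day!", {grammar});
const parsed = grammar.parse(res); // typed according to the schema
console.log(parsed.positiveWordsInUserMessage);
```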
2 changes: 1 addition & 1 deletion docs/index.md
@@ -42,7 +42,7 @@ features:
linkText: Learn more
- icon: <svg xmlns="http://www.w3.org/2000/svg" height="24" viewBox="0 -960 960 960" width="24" fill="currentColor"><path d="M600-160q-17 0-28.5-11.5T560-200q0-17 11.5-28.5T600-240h80q17 0 28.5-11.5T720-280v-80q0-38 22-69t58-44v-14q-36-13-58-44t-22-69v-80q0-17-11.5-28.5T680-720h-80q-17 0-28.5-11.5T560-760q0-17 11.5-28.5T600-800h80q50 0 85 35t35 85v80q0 17 11.5 28.5T840-560t28.5 11.5Q880-537 880-520v80q0 17-11.5 28.5T840-400t-28.5 11.5Q800-377 800-360v80q0 50-35 85t-85 35h-80Zm-320 0q-50 0-85-35t-35-85v-80q0-17-11.5-28.5T120-400t-28.5-11.5Q80-423 80-440v-80q0-17 11.5-28.5T120-560t28.5-11.5Q160-583 160-600v-80q0-50 35-85t85-35h80q17 0 28.5 11.5T400-760q0 17-11.5 28.5T360-720h-80q-17 0-28.5 11.5T240-680v80q0 38-22 69t-58 44v14q36 13 58 44t22 69v80q0 17 11.5 28.5T280-240h80q17 0 28.5 11.5T400-200q0 17-11.5 28.5T360-160h-80Z"/></svg>
title: Powerful features
-details: Force a model to generate output according to a JSON schema, give a model functions it can call on demand, and much more
+details: Enforce a model to generate output according to a JSON schema, provide a model with functions it can call on demand, and much more
link: /guide/grammar#json-schema
linkText: Learn more
---
2 changes: 1 addition & 1 deletion package.json
@@ -1,7 +1,7 @@
{
"name": "node-llama-cpp",
"version": "0.1.0",
-"description": "Run AI models locally on your machine with node.js bindings for llama.cpp. Force a JSON schema on the model output on the generation level",
+"description": "Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation level",
"main": "./dist/index.js",
"type": "module",
"types": "./dist/index.d.ts",