Commit

add more LLM docs and improve format
ShilinHe committed Dec 20, 2023
1 parent 37cbbb4 commit 49cfd97
Showing 14 changed files with 149 additions and 82 deletions.
Empty file added website/docs/FAQ.md
30 changes: 30 additions & 0 deletions website/docs/llms/aoai.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,30 @@
---
description: Using LLMs from OpenAI/AOAI
---
# Azure OpenAI

1. Create an account on [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) and get your API key.
2. Add the following to your `taskweaver_config.json` file:
```json showLineNumbers
{
"llm.api_base":"YOUR_AOAI_ENDPOINT",
"llm.api_key":"YOUR_API_KEY",
"llm.api_type":"azure",
"llm.auth_mode":"api-key",
"llm.model":"gpt-4-1106-preview",
"llm.response_format": "json_object"
}
```

:::tip
`llm.model` is the model name you want to use.
You can find the list of models [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
:::

:::info
For `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`, `llm.response_format` can be set to `json_object`.
For earlier models, which do not explicitly support a JSON response format, `llm.response_format` should be set to `null`.
:::

3. Start TaskWeaver and chat with it.
You can refer to the [Quick Start](../quickstart.md) for more details.
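If you prefer to script the setup, the config file from step 2 can be written and validated from a shell. This is just a sketch with the same placeholder values as above; substitute your real endpoint and key:

```shell
# Write the Azure OpenAI config from step 2 (placeholder values) and
# check that it is valid JSON before starting TaskWeaver.
cat > taskweaver_config.json <<'EOF'
{
  "llm.api_base": "YOUR_AOAI_ENDPOINT",
  "llm.api_key": "YOUR_API_KEY",
  "llm.api_type": "azure",
  "llm.auth_mode": "api-key",
  "llm.model": "gpt-4-1106-preview",
  "llm.response_format": "json_object"
}
EOF
# json.tool exits non-zero on malformed JSON
python -m json.tool taskweaver_config.json
```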
18 changes: 18 additions & 0 deletions website/docs/llms/gemini.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,18 @@
# Gemini

1. Create an account on [Google AI](https://ai.google.dev/) and get your API key.
2. Add the following content to your `taskweaver_config.json` file:
```json showLineNumbers
{
"llm.api_type": "google_genai",
"llm.google_genai.api_key": "YOUR_API_KEY",
"llm.google_genai.model": "gemini-pro"
}
```


3. Start TaskWeaver and chat with it.
You can refer to the [Quick Start](../quickstart.md) for more details.



1 change: 0 additions & 1 deletion website/docs/llms/geni.md

This file was deleted.

34 changes: 33 additions & 1 deletion website/docs/llms/liteLLM.md
Original file line number Diff line number Diff line change
Expand Up @@ -3,4 +3,36 @@ description: Using LLMs from LiteLLM
---


# LiteLLM
# LiteLLM

:::info
[LiteLLM](https://docs.litellm.ai/) provides a unified interface for calling 100+ LLMs with the same input/output format, including OpenAI, Hugging Face, Anthropic, vLLM, Cohere, and even custom LLM API servers. With LiteLLM as the bridge, many LLMs can be onboarded to TaskWeaver. Here we use the OpenAI Proxy Server provided by LiteLLM for the configuration.
:::

1. Install the LiteLLM Proxy and configure the LLM server by following the instructions [here](https://docs.litellm.ai/docs/proxy/quick_start). In general, there are a few steps:
   1. Install the package: `pip install 'litellm[proxy]'`
   2. Set up the API key and any other environment variables required by the LLM provider. Taking [Cohere](https://cohere.com/) as an example, you need to set `export COHERE_API_KEY=my-api-key`.
   3. Run the LiteLLM proxy server with `litellm --model MODEL_NAME --drop_params`; for Cohere, the model name can be `command-nightly`. The `--drop_params` flag ensures API compatibility by dropping request parameters the backend does not support. A server is then started automatically on `http://0.0.0.0:8000`.
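The three sub-steps above can be sketched as a single shell session (Cohere values as in the docs; `pip install` and the proxy start are long-running, so treat this as a setup sketch rather than something to paste blindly):

```shell
# 1. Install the LiteLLM proxy; quotes keep the shell from globbing "[proxy]"
pip install 'litellm[proxy]'

# 2. Provider credentials -- Cohere is used here, as in the steps above
export COHERE_API_KEY=my-api-key

# 3. Start the proxy; --drop_params drops request parameters the backend
#    does not support. The server listens on http://0.0.0.0:8000 by default.
litellm --model command-nightly --drop_params
```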

:::tip
The full list of models supported by LiteLLM can be found on [this page](https://docs.litellm.ai/docs/providers).
:::


2. Add the following content to your `taskweaver_config.json` file:

```json showLineNumbers
{
"llm.api_base": "http://0.0.0.0:8000",
"llm.api_key": "anything",
"llm.model": "gpt-3.5-turbo"
}
```

:::info
`llm.api_key` and `llm.model` are placeholders for the API call; their actual values are not used. If the configuration does not work, please refer to the LiteLLM [documentation](https://docs.litellm.ai/docs/proxy/quick_start) and test locally whether you can send requests to the LLM.
:::


3. Open a new terminal, start TaskWeaver, and chat with it.
You can refer to the [Quick Start](../quickstart.md) for more details.
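Before starting TaskWeaver, you can sanity-check that the proxy is answering on the expected address. Since the proxy speaks the OpenAI wire format, the model-listing route below should respond (the route name is an assumption based on OpenAI API compatibility; verify against the LiteLLM docs):

```shell
# Query the OpenAI-compatible model list from the local LiteLLM proxy
curl -s http://0.0.0.0:8000/v1/models
```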
23 changes: 19 additions & 4 deletions website/docs/llms/ollama.md
Original file line number Diff line number Diff line change
@@ -1,11 +1,25 @@
# Ollama

## How to use Ollama LLM API
1. Go to [Ollama](https://github.com/jmorganca/ollama) and follow the instructions to serve an LLM on your local environment.
We provide a short example of configuring Ollama below, which might change as Ollama is updated.

```bash title="install ollama and serve LLMs in local" showLineNumbers
## Install ollama on Linux & WSL2
curl https://ollama.ai/install.sh | sh
## Start the Ollama server
ollama serve
## Open another terminal and run the model
ollama run llama2
```
:::tip
We recommend deploying the LLM with a parameter scale exceeding 13B for enhanced performance (such as Llama 2 13B).
:::
:::info
When serving LLMs, Ollama starts a server at `http://localhost:11434` by default, which is later used as the API base in `taskweaver_config.json`.
:::

2. Add the following configuration to `taskweaver_config.json`:
```json
```json showLineNumbers
{
"llm.api_base": "http://localhost:11434",
"llm.api_key": "ARBITRARY_STRING",
Expand All @@ -14,5 +28,6 @@ We recommend deploying the LLM with a parameter scale exceeding 13 billion for e
}
```
NOTE: `llm.api_base` is the URL of the Ollama server started above, and `llm.model` is the name of the model served by Ollama.

3. Start TaskWeaver and chat with it.
You can refer to the [Quick Start](../quickstart.md) for more details.
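If TaskWeaver cannot reach the model, first confirm the Ollama server from step 1 is up on its default port. Ollama exposes a small HTTP API for this; the `/api/tags` route lists locally available models (verify the route against the current Ollama docs):

```shell
# List models known to the local Ollama server (default port 11434)
curl -s http://localhost:11434/api/tags
```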
27 changes: 27 additions & 0 deletions website/docs/llms/openai.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,27 @@
---
description: Using LLMs from OpenAI
---
# OpenAI

1. Create an account on [OpenAI](https://beta.openai.com/) and get your [API key](https://platform.openai.com/api-keys).
2. Add the following to your `taskweaver_config.json` file:
```json showLineNumbers
{
"llm.api_type":"openai",
"llm.api_base": "https://api.openai.com/v1",
"llm.api_key": "YOUR_API_KEY",
"llm.model": "gpt-4-1106-preview",
"llm.response_format": "json_object"
}
```
:::tip
`llm.model` is the model name you want to use.
You can find the list of models [here](https://platform.openai.com/docs/models).
:::

:::info
For `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`, `llm.response_format` can be set to `json_object`.
For earlier models, which do not explicitly support a JSON response format, `llm.response_format` should be set to `null`.
:::
3. Start TaskWeaver and chat with it.
You can refer to the [Quick Start](../quickstart.md) for more details.
62 changes: 0 additions & 62 deletions website/docs/llms/openai.mdx

This file was deleted.

11 changes: 6 additions & 5 deletions website/docs/llms/qwen.md
Original file line number Diff line number Diff line change
@@ -1,11 +1,12 @@
# QWen

## How to use QWen API

1. QWen (Tongyi Qianwen) is an LLM developed by Alibaba. Go to [QWen](https://dashscope.aliyun.com/), register an account, and get an API key. More details can be found [here](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key?spm=a2c4g.11186623.0.0.7b5749d72j3SYU) (in Chinese).
2. Install the required package `dashscope`:
```bash
pip install dashscope
```
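A quick import check confirms the package landed in the Python environment TaskWeaver will use (no API call is made, so no key is needed yet):

```shell
# Fails with ModuleNotFoundError if dashscope is not installed in this environment
python -c "import dashscope"
```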
3. Add the following configuration to `taskweaver_config.json`:
```json
```json showLineNumbers
{
"llm.api_type": "qwen",
"llm.model": "qwen-max",
Expand Down
6 changes: 3 additions & 3 deletions website/docs/planner.md
Original file line number Diff line number Diff line change
Expand Up @@ -33,17 +33,17 @@ CodeInterpreter
>>> [INIT_PLAN]
1. ask Code Interpreter to handle the request; 2. report the result to user <interactively depends on 1>
>>> [PLAN]
1. ask Code Interpreter to handle user's request; 2. report the result to user
>>> [CURRENT_PLAN_STEP]
1. ask Code Interpreter to handle the request
>>> [PLANNER->CODEINTERPRETER]
Please process this request: generate 10 random numbers
>>> [PYTHON]Starting...
random_numbers = np.random.rand(10)
random_numbers
>>> [VERIFICATION]
NONE
>>> [STATUS]Starting...
SUCCESS
>>> [RESULT]
The execution of the generated python code above has succeeded
Expand Down
2 changes: 1 addition & 1 deletion website/docs/quickstart.md
Original file line number Diff line number Diff line change
Expand Up @@ -70,7 +70,7 @@ python -m taskweaver -p ./project/
This will start the TaskWeaver process and you can interact with it through the command line interface.
If everything goes well, you will see the following prompt:

```bash
=========================================================
_____ _ _ __
|_ _|_ _ ___| | _ | | / /__ ____ __ _____ _____
Expand Down
8 changes: 4 additions & 4 deletions website/docs/session.md
Original file line number Diff line number Diff line change
Expand Up @@ -12,7 +12,7 @@ You can refer to [taskweaver_as_a_lib](taskweaver_as_a_lib.md) to see how to set
- `code_interpreter_only`: allow users to directly communicate with the Code Interpreter.
In this mode, users can only send messages to the Code Interpreter and receive messages from the Code Interpreter.
Here is an example:
``````bash
=========================================================
_____ _ _ __
|_ _|_ _ ___| | _ | | / /__ ____ __ _____ _____
Expand All @@ -22,13 +22,13 @@ You can refer to [taskweaver_as_a_lib](taskweaver_as_a_lib.md) to see how to set
=========================================================
TaskWeaver: I am TaskWeaver, an AI assistant. To get started, could you please enter your request?
Human: generate 10 random numbers
>>> [PYTHON]Starting...
import numpy as np
random_numbers = np.random.rand(10)
random_numbers
>>> [VERIFICATION]
NONE
>>> [STATUS]Starting...
SUCCESS
>>> [RESULT]
The execution of the generated python code above has succeeded
Expand All @@ -54,7 +54,7 @@ TaskWeaver: The following python code has been executed:
import numpy as np
random_numbers = np.random.rand(10)
random_numbers
```

The execution of the generated python code above has succeeded

Expand Down
1 change: 1 addition & 0 deletions website/docusaurus.config.js
Original file line number Diff line number Diff line change
Expand Up @@ -140,6 +140,7 @@ const config = {
prism: {
darkTheme: prismThemes.github,
theme: prismThemes.dracula,
additionalLanguages: ['bash', 'json', 'yaml'],
},
}),
themes: [
Expand Down
8 changes: 7 additions & 1 deletion website/sidebars.js
Original file line number Diff line number Diff line change
Expand Up @@ -24,9 +24,15 @@ const sidebars = {
{
type: 'category',
label: 'LLMs',
link: {
type: 'generated-index',
title: 'LLMs',
description: 'Learn how to call models from different LLMs',
slug: '/llms',
},
collapsible: true,
collapsed: false,
items: ['llms/index', 'llms/openai', 'llms/liteLLM', 'llms/ollama', 'llms/geni', 'llms/qwen'],
items: ['llms/openai', 'llms/aoai', 'llms/liteLLM', 'llms/ollama', 'llms/gemini', 'llms/qwen'],
},

{
Expand Down
