
docs: Update to include guide for OpenAI LLMs #3552

Merged 3 commits on Aug 23, 2024. Changes shown are from 2 commits.
2 changes: 2 additions & 0 deletions docs/modules/usage/llms/llms.md
@@ -26,8 +26,10 @@ The following environment variables might be necessary for some LLMs/providers:

We have a few guides for running OpenHands with specific model providers:

- [OpenAI](llms/openai-llms)
- [ollama](llms/local-llms)
- [Azure](llms/azure-llms)
- [Google](llms/google-llms)

If you're using another provider, we encourage you to open a PR to share your setup!

75 changes: 75 additions & 0 deletions docs/modules/usage/llms/openai-llms.md
@@ -0,0 +1,75 @@
# OpenAI

OpenHands uses [LiteLLM](https://www.litellm.ai/) to make calls to OpenAI's chat models. You can find their full documentation on OpenAI chat calls [here](https://docs.litellm.ai/docs/providers/openai).

## Configuration

### Manual Configuration

When running the OpenHands Docker image, you'll need to set the following environment variables:

```sh
LLM_MODEL="openai/<gpt-model-name>" # e.g. "openai/gpt-4o"
LLM_API_KEY="<your-openai-project-api-key>"
```

To see a full list of OpenAI models that LiteLLM supports, please visit https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models.

To find or create your OpenAI Project API Key, please visit https://platform.openai.com/api-keys.
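Because LiteLLM routes on the `openai/` prefix, a quick shell check before launching the container can catch a misspelled model string or a missing key. A minimal sketch — the variable names match the snippets in this guide, and the values are placeholders:

```sh
# Placeholder values — substitute your own model and key
LLM_MODEL="openai/gpt-4o"
LLM_API_KEY="sk-proj-example"   # hypothetical placeholder, not a real key

# LiteLLM expects the provider prefix; fail fast if it is missing
case "$LLM_MODEL" in
  openai/*) MODEL_OK=yes ;;
  *)        MODEL_OK=no ;;
esac

[ "$MODEL_OK" = "yes" ] || { echo "LLM_MODEL should look like openai/<gpt-model-name>" >&2; exit 1; }
[ -n "$LLM_API_KEY" ]   || { echo "LLM_API_KEY is not set" >&2; exit 1; }
echo "config looks sane: $LLM_MODEL"
```

The same two variables are passed straight through to the container with `-e` flags in the example below, so checking them once in the shell avoids a failed container start.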

**Example**:

```sh
export WORKSPACE_BASE=$(pwd)/workspace

docker run -it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e LLM_MODEL="openai/<gpt-model-name>" \
-e LLM_API_KEY="<your-openai-project-api-key>" \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app-$(date +%Y%m%d%H%M%S) \
ghcr.io/opendevin/opendevin:0.8
```

### UI Configuration

You can also directly set the `LLM_MODEL` and `LLM_API_KEY` in the OpenHands client itself. Follow this guide to get up and running with the OpenHands client.

From there, you can set your model and API key in the settings window.

## Using OpenAI-Compatible Endpoints

Just as with OpenAI chat completions, OpenHands uses LiteLLM for OpenAI-compatible endpoints. You can find their full documentation on this topic [here](https://docs.litellm.ai/docs/providers/openai_compatible).

When running the OpenHands Docker image, you'll need to set the following environment variables:

```sh
LLM_BASE_URL="<api-base-url>" # e.g. "http://0.0.0.0:4000"
LLM_MODEL="openai/<model-name>" # e.g. "openai/mistral"
LLM_API_KEY="<your-api-key>"
```
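Note that the model string keeps the `openai/` prefix even for non-OpenAI models: the prefix tells LiteLLM to speak the OpenAI wire protocol, while the base URL points it at your server instead of api.openai.com. A minimal pre-flight sketch with placeholder values, checking that the base URL includes a scheme:

```sh
LLM_BASE_URL="http://0.0.0.0:4000"   # placeholder — your server's address
LLM_MODEL="openai/mistral"           # keep the openai/ prefix for compatible endpoints

case "$LLM_BASE_URL" in
  http://*|https://*) URL_OK=yes ;;
  *)                  URL_OK=no ;;
esac

[ "$URL_OK" = "yes" ] || { echo "LLM_BASE_URL must include http:// or https://" >&2; exit 1; }
echo "endpoint config: $LLM_MODEL via $LLM_BASE_URL"
```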

**Example**:

```sh
export WORKSPACE_BASE=$(pwd)/workspace

docker run -it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-e LLM_BASE_URL="<api-base-url>" \
-e LLM_MODEL="openai/<model-name>" \
-e LLM_API_KEY="<your-api-key>" \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app-$(date +%Y%m%d%H%M%S) \
ghcr.io/opendevin/opendevin:0.8
```