
documentation changes associated with UI changes and more consistency #3866

Merged 7 commits on Sep 14, 2024
5 changes: 3 additions & 2 deletions docs/modules/usage/getting-started.md
@@ -40,9 +40,10 @@ After running the command above, you'll find OpenHands running at [http://localh
The agent will have access to the `./workspace` folder to do its work. You can copy existing code here, or change `WORKSPACE_BASE` in the
command to point to an existing folder.

Upon launching OpenHands, you'll see a settings modal. You must select an LLM backend using `Model`, and enter a corresponding `API Key`.
Upon launching OpenHands, you'll see a settings modal. You must select an `LLM Provider` and `LLM Model` and enter a corresponding `API Key`.
These can be changed at any time by selecting the `Settings` button (gear icon) in the UI.
If the required `Model` does not exist in the list, you can toggle `Use custom model` and manually enter it in the text box.
If the required `LLM Model` does not exist in the list, you can toggle `Advanced Options` and manually enter it in the `Custom Model` text box.
The `Advanced Options` also allow you to specify a `Base URL` if required.

<img src="/img/settings-screenshot.png" alt="settings-modal" width="340" />

2 changes: 1 addition & 1 deletion docs/modules/usage/how-to/cli-mode.md
@@ -41,7 +41,7 @@ LLM_MODEL="anthropic/claude-3-5-sonnet-20240620"
3. Set `LLM_API_KEY` to your API key:

```bash
LLM_API_KEY="abcde"
LLM_API_KEY="sk_test_12345"
```

4. Run the following Docker command:
2 changes: 1 addition & 1 deletion docs/modules/usage/how-to/headless-mode.md
@@ -35,7 +35,7 @@ LLM_MODEL="anthropic/claude-3-5-sonnet-20240620"
3. Set `LLM_API_KEY` to your API key:

```bash
LLM_API_KEY="abcde"
LLM_API_KEY="sk_test_12345"
```

4. Run the following Docker command:
48 changes: 18 additions & 30 deletions docs/modules/usage/llms/azure-llms.md
@@ -1,55 +1,43 @@
# Azure OpenAI LLM

## Completion

OpenHands uses LiteLLM for completion calls. You can find their documentation on Azure [here](https://docs.litellm.ai/docs/providers/azure).

### Azure openai configs
## Azure OpenAI Configuration

When running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:
When running OpenHands, you'll need to set the following environment variable using `-e` in the
[docker run command](/modules/usage/getting-started#installation):

```
LLM_BASE_URL="<azure-api-base-url>" # e.g. "https://openai-gpt-4-test-v-1.openai.azure.com/"
LLM_API_KEY="<azure-api-key>"
LLM_MODEL="azure/<your-gpt-deployment-name>"
LLM_API_VERSION="<api-version>" # e.g. "2024-02-15-preview"
LLM_API_VERSION="<api-version>" # e.g. "2023-05-15"
```

Example:
```bash
docker run -it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-e LLM_BASE_URL="x.openai.azure.com" \
-e LLM_API_VERSION="2024-02-15-preview" \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app-$(date +%Y%m%d%H%M%S) \
ghcr.io/all-hands-ai/openhands:main
docker run -it --pull=always \
    -e LLM_API_VERSION="2023-05-15" \
    ...
```

You can also set the model and API key in the OpenHands UI through the Settings.
Then set the following in the OpenHands UI through the Settings:

:::note
You can find your ChatGPT deployment name on the deployments page in Azure. It could be the same with the chat model
name (e.g. 'GPT4-1106-preview'), by default or initially set, but it doesn't have to be the same. Run OpenHands,
and when you load it in the browser, go to Settings and set model as above: "azure/&lt;your-actual-gpt-deployment-name&gt;".
If it's not in the list, you can open the Settings modal, switch to "Custom Model", and enter your model name.
You will need your ChatGPT deployment name, which can be found on the deployments page in Azure. This is referenced as
&lt;deployment-name&gt; below.
:::

* Enable `Advanced Options`
* Set `Custom Model` to azure/&lt;deployment-name&gt;
* Set `Base URL` to your Azure API base URL (e.g. https://example-endpoint.openai.azure.com)
* Set `API Key` to your Azure API key

## Embeddings

OpenHands uses llama-index for embeddings. You can find their documentation on Azure [here](https://docs.llamaindex.ai/en/stable/api_reference/embeddings/azure_openai/).

### Azure openai configs

The model used for Azure OpenAI embeddings is "text-embedding-ada-002".
You need the correct deployment name for this model in your Azure account.
### Azure OpenAI Configuration

When running OpenHands in Docker, set the following environment variables using `-e`:
When running OpenHands, set the following environment variables using `-e` in the
[docker run command](/modules/usage/getting-started#installation):

```
LLM_EMBEDDING_MODEL="azureopenai"
26 changes: 14 additions & 12 deletions docs/modules/usage/llms/google-llms.md
@@ -1,28 +1,30 @@
# Google Gemini/Vertex LLM

## Completion

OpenHands uses LiteLLM for completion calls. The following resources are relevant for using OpenHands with Google's LLMs:

- [Gemini - Google AI Studio](https://docs.litellm.ai/docs/providers/gemini)
- [VertexAI - Google Cloud Platform](https://docs.litellm.ai/docs/providers/vertex)

### Gemini - Google AI Studio Configs
## Gemini - Google AI Studio Configs

To use Gemini through Google AI Studio when running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:
When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
* `LLM Provider` to `Gemini`
* `LLM Model` to the model you will be using.
If the model is not in the list, toggle `Advanced Options`, and enter it in `Custom Model` (e.g. gemini/&lt;model-name&gt;).
* `API Key`

```
GEMINI_API_KEY="<your-google-api-key>"
LLM_MODEL="gemini/gemini-1.5-pro"
```
## VertexAI - Google Cloud Platform Configs

### Vertex AI - Google Cloud Platform Configs

To use Vertex AI through Google Cloud Platform when running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:
To use Vertex AI through Google Cloud Platform when running OpenHands, you'll need to set the following environment
variables using `-e` in the [docker run command](/modules/usage/getting-started#installation):

```
GOOGLE_APPLICATION_CREDENTIALS="<json-dump-of-gcp-service-account-json>"
VERTEXAI_PROJECT="<your-gcp-project-id>"
VERTEXAI_LOCATION="<your-gcp-location>"
LLM_MODEL="vertex_ai/<desired-llm-model>"
```
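Note that `GOOGLE_APPLICATION_CREDENTIALS` takes the JSON *content* of a GCP service-account key, not a file path. A minimal sketch of loading it from a downloaded key file (the filename and stub contents below are hypothetical):

```shell
# Hypothetical stub standing in for the key file you download from GCP.
printf '{"type": "service_account", "project_id": "my-gcp-project"}' > service-account.json

# Read the file's JSON content into the variable, ready to pass via -e.
GOOGLE_APPLICATION_CREDENTIALS="$(cat service-account.json)"
echo "$GOOGLE_APPLICATION_CREDENTIALS"
```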

Then set the following in the OpenHands UI through the Settings:
* `LLM Provider` to `VertexAI`
* `LLM Model` to the model you will be using.
If the model is not in the list, toggle `Advanced Options`, and enter it in `Custom Model` (e.g. vertex_ai/&lt;model-name&gt;).
17 changes: 10 additions & 7 deletions docs/modules/usage/llms/llms.md
@@ -24,22 +24,25 @@ also encourage you to open a PR to share your setup process to help others using
For a full list of the providers and models available, please consult the
[litellm documentation](https://docs.litellm.ai/docs/providers).

## Local and Open Source Models

:::note
Most current local and open source models are not as capable as the leading commercial models. When using such models,
you may see long wait times between messages, poor responses, or errors about malformed JSON. OpenHands can only be as
powerful as the models driving it. However, if you do find ones that work, please add them to the verified list above.
:::

## LLM Configuration

The `LLM_MODEL` environment variable controls which model is used in programmatic interactions.
But when using the OpenHands UI, you'll need to choose your model in the settings window.
The following can be set in the OpenHands UI through the Settings:
* `LLM Provider`
* `LLM Model`
* `API Key`
* `Base URL` (through `Advanced Settings`)

The following environment variables might be necessary for some LLMs/providers:
Some settings required by certain LLMs/providers cannot be set through the UI. Instead, they can be set through
environment variables passed to the [docker run command](/modules/usage/getting-started#installation) using `-e`:

* `LLM_API_KEY`
* `LLM_API_VERSION`
* `LLM_BASE_URL`
* `LLM_EMBEDDING_MODEL`
* `LLM_EMBEDDING_DEPLOYMENT_NAME`
* `LLM_DROP_PARAMS`
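As a sketch, these variables are passed to `docker run` as repeated `-e` flags; in bash they can be collected in an array first (the values below are placeholders, not real credentials):

```shell
# Placeholder values -- substitute your own provider's settings.
LLM_ENV=(
  -e LLM_API_KEY="sk_test_12345"
  -e LLM_API_VERSION="2023-05-15"
  -e LLM_BASE_URL="http://0.0.0.0:3000"
)

# The array expands in place inside the docker run command:
echo docker run -it --pull=always "${LLM_ENV[@]}" ghcr.io/all-hands-ai/openhands:main
```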
21 changes: 7 additions & 14 deletions docs/modules/usage/llms/openai-llms.md
@@ -1,23 +1,16 @@
# OpenAI

OpenHands uses [LiteLLM](https://www.litellm.ai/) to make calls to OpenAI's chat models. You can find their full documentation on OpenAI chat calls [here](https://docs.litellm.ai/docs/providers/openai).
OpenHands uses LiteLLM to make calls to OpenAI's chat models. You can find their full documentation on OpenAI chat calls [here](https://docs.litellm.ai/docs/providers/openai).

## Configuration

When running the OpenHands Docker image, you'll need to choose a model and set your API key in the OpenHands UI through the Settings.

To see a full list of OpenAI models that LiteLLM supports, please visit https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models.

To find or create your OpenAI Project API Key, please visit https://platform.openai.com/api-keys.
When running OpenHands, you'll need to set the following in the OpenHands UI through the Settings:
* `LLM Provider` to `OpenAI`
* `LLM Model` to the model you will be using.
[Visit **here** to see a full list of OpenAI models that LiteLLM supports.](https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models)
If the model is not in the list, toggle `Advanced Options`, and enter it in `Custom Model` (e.g. openai/&lt;model-name&gt;).
* `API Key`. To find or create your OpenAI Project API Key, [see **here**](https://platform.openai.com/api-keys).

## Using OpenAI-Compatible Endpoints

Just as for OpenAI Chat completions, we use LiteLLM for OpenAI-compatible endpoints. You can find their full documentation on this topic [here](https://docs.litellm.ai/docs/providers/openai_compatible).

When running the OpenHands Docker image, you'll need to set the following environment variables using `-e`:

```sh
LLM_BASE_URL="<api-base-url>" # e.g. "http://0.0.0.0:3000"
```

Then set your model and API key in the OpenHands UI through the Settings.
31 changes: 4 additions & 27 deletions docs/modules/usage/troubleshooting/troubleshooting.md
@@ -9,13 +9,15 @@ We'll try to make the install process easier, but for now you can look for your
If you find more information or a workaround for one of these issues, please open a *PR* to add details to this file.

:::tip
If you're running on Windows and having trouble, check out our [Notes for Windows and WSL users](troubleshooting/windows).
OpenHands only supports Windows via [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).
Please be sure to run all commands inside your WSL terminal.
Check out [Notes for WSL on Windows Users](troubleshooting/windows) for troubleshooting guidance.
:::

## Common Issues

* [Unable to connect to Docker](#unable-to-connect-to-docker)
* [Unable to connect to SSH box](#unable-to-connect-to-ssh-box)
* [Unable to connect to LLM](#unable-to-connect-to-llm)
* [404 Resource not found](#404-resource-not-found)
* [`make build` getting stuck on package installations](#make-build-getting-stuck-on-package-installations)
* [Sessions are not restored](#sessions-are-not-restored)
@@ -45,31 +47,6 @@ OpenHands uses a Docker container to do its work safely, without potentially bre
* If you are on a Mac, check the [permissions requirements](https://docs.docker.com/desktop/mac/permission-requirements/) and in particular consider enabling the `Allow the default Docker socket to be used` under `Settings > Advanced` in Docker Desktop.
* In addition, upgrade your Docker to the latest version under `Check for Updates`
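Before digging further, here is a quick generic check (not OpenHands-specific) that your shell can reach the Docker daemon at all:

```shell
# `docker info` exits non-zero when the client cannot reach the daemon
# (or when docker is not installed), so its exit status is the check.
if docker info > /dev/null 2>&1; then
  DOCKER_STATUS="reachable"
else
  DOCKER_STATUS="not reachable"
fi
echo "Docker daemon is $DOCKER_STATUS"
```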

---
### Unable to connect to SSH box

[GitHub Issue](https://github.com/All-Hands-AI/OpenHands/issues/1156)

**Symptoms**

```python
self.shell = DockerSSHBox(
...
pexpect.pxssh.ExceptionPxssh: Could not establish connection to host
```

**Details**

By default, OpenHands connects to a running container using SSH. On some machines,
especially Windows, this seems to fail.

**Workarounds**

* Restart your computer (sometimes it does work)
* Be sure to have the latest versions of WSL and Docker
* Check that your distribution in WSL is up to date as well
* Try [this reinstallation guide](https://github.com/All-Hands-AI/OpenHands/issues/1156#issuecomment-2064549427)

---
### Unable to connect to LLM

2 changes: 1 addition & 1 deletion docs/modules/usage/troubleshooting/windows.md
@@ -1,4 +1,4 @@
# Notes for Windows and WSL Users
# Notes for WSL on Windows Users

OpenHands only supports Windows via [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).
Please be sure to run all commands inside your WSL terminal.
Binary file modified docs/static/img/settings-screenshot.png