diff --git a/docs/modules/usage/llms/llms.md b/docs/modules/usage/llms/llms.md
index d4ceb021a0..8edbe78579 100644
--- a/docs/modules/usage/llms/llms.md
+++ b/docs/modules/usage/llms/llms.md
@@ -26,8 +26,10 @@ The following environment variables might be necessary for some LLMs/providers:
 
 We have a few guides for running OpenHands with specific model providers:
 
+- [OpenAI](llms/openai-llms)
 - [ollama](llms/local-llms)
 - [Azure](llms/azure-llms)
+- [Google](llms/google-llms)
 
 If you're using another provider, we encourage you to open a PR to share your setup!
 
diff --git a/docs/modules/usage/llms/openai-llms.md b/docs/modules/usage/llms/openai-llms.md
new file mode 100644
index 0000000000..e8bf74320a
--- /dev/null
+++ b/docs/modules/usage/llms/openai-llms.md
@@ -0,0 +1,75 @@
+# OpenAI
+
+OpenHands uses [LiteLLM](https://www.litellm.ai/) to make calls to OpenAI's chat models. You can find their full documentation on OpenAI chat calls [here](https://docs.litellm.ai/docs/providers/openai).
+
+## Configuration
+
+### Manual Configuration
+
+When running the OpenHands Docker image, you'll need to set the following environment variables:
+
+```sh
+LLM_MODEL="openai/<your-model-name>" # e.g. "openai/gpt-4o"
+LLM_API_KEY="<your-api-key>"
+```
+
+To see a full list of OpenAI models that LiteLLM supports, please visit https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models.
+
+To find or create your OpenAI Project API Key, please visit https://platform.openai.com/api-keys.
+
+**Example**:
+
+```sh
+export WORKSPACE_BASE=$(pwd)/workspace
+
+docker run -it \
+    --pull=always \
+    -e SANDBOX_USER_ID=$(id -u) \
+    -e LLM_MODEL="openai/<your-model-name>" \
+    -e LLM_API_KEY="<your-api-key>" \
+    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
+    -v $WORKSPACE_BASE:/opt/workspace_base \
+    -v /var/run/docker.sock:/var/run/docker.sock \
+    -p 3000:3000 \
+    --add-host host.docker.internal:host-gateway \
+    --name openhands-app-$(date +%Y%m%d%H%M%S) \
+    ghcr.io/opendevin/opendevin:0.8
+```
+
+### UI Configuration
+
+You can also set the `LLM_MODEL` and `LLM_API_KEY` directly in the OpenHands client itself. Follow this guide to get up and running with the OpenHands client.
+
+From there, you can set your model and API key in the settings window.
+
+## Using OpenAI-Compatible Endpoints
+
+Just as for OpenAI chat completions, we use LiteLLM for OpenAI-compatible endpoints. You can find their full documentation on this topic [here](https://docs.litellm.ai/docs/providers/openai_compatible).
+
+When running the OpenHands Docker image, you'll need to set the following environment variables:
+
+```sh
+LLM_BASE_URL="<api-base-url>" # e.g. "http://0.0.0.0:3000"
+LLM_MODEL="openai/<your-model-name>" # e.g. "openai/mistral"
+LLM_API_KEY="<your-api-key>"
+```
+
+**Example**:
+
+```sh
+export WORKSPACE_BASE=$(pwd)/workspace
+
+docker run -it \
+    --pull=always \
+    -e SANDBOX_USER_ID=$(id -u) \
+    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
+    -e LLM_BASE_URL="<api-base-url>" \
+    -e LLM_MODEL="openai/<your-model-name>" \
+    -e LLM_API_KEY="<your-api-key>" \
+    -v $WORKSPACE_BASE:/opt/workspace_base \
+    -v /var/run/docker.sock:/var/run/docker.sock \
+    -p 3000:3000 \
+    --add-host host.docker.internal:host-gateway \
+    --name openhands-app-$(date +%Y%m%d%H%M%S) \
+    ghcr.io/opendevin/opendevin:0.8
+```
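A container launched with an empty `LLM_MODEL` or `LLM_API_KEY` tends to fail only once the first model call is made, which is hard to diagnose. A minimal pre-flight sketch for the `docker run` commands in this patch (the `check_llm_env` helper is illustrative, not part of OpenHands; for OpenAI-compatible endpoints you would check `LLM_BASE_URL` as well):

```shell
# Illustrative pre-flight check: confirm the variables the docker run
# commands rely on are set before launching the container.
check_llm_env() {
  missing=""
  # Collect the names of any required variables that are empty or unset.
  [ -z "$LLM_MODEL" ] && missing="$missing LLM_MODEL"
  [ -z "$LLM_API_KEY" ] && missing="$missing LLM_API_KEY"
  if [ -n "$missing" ]; then
    echo "error: not set:$missing" >&2
    return 1
  fi
  echo "LLM configuration OK: $LLM_MODEL"
}
```

Run it after exporting the variables and before `docker run`; a non-zero exit status means at least one variable is missing.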