diff --git a/website/docs/topics/non-openai-models/about-using-nonopenai-models.md b/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
index c9ddc1b3988..e202679f29e 100644
--- a/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
+++ b/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
@@ -32,7 +32,7 @@ authentication which is usually handled through an API key.
 Examples of using cloud-based proxy servers providers that have an OpenAI-compatible API
 are provided below:
 
-- [together.ai example](cloud-togetherai)
+- [together.ai example](/docs/topics/non-openai-models/cloud-togetherai)
 
 ### Locally run proxy servers
 
@@ -46,9 +46,9 @@ OpenAI-compatible API, running them in AutoGen is straightforward.
 Examples of using locally run proxy servers that have an OpenAI-compatible API are
 provided below:
 
-- [LiteLLM with Ollama example](local-litellm-ollama)
-- [LM Studio](local-lm-studio)
-- [vLLM example](local-vllm)
+- [LiteLLM with Ollama example](/docs/topics/non-openai-models/local-litellm-ollama)
+- [LM Studio](/docs/topics/non-openai-models/local-lm-studio)
+- [vLLM example](/docs/topics/non-openai-models/local-vllm)
 
 ````mdx-code-block
 :::tip
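
For context, the pages linked in this diff all rely on the same mechanism: pointing AutoGen's standard `config_list` at an OpenAI-compatible endpoint via `base_url`. A minimal sketch is below; the model name, URL, and API key are illustrative assumptions (e.g. a local proxy such as LiteLLM serving Ollama), not values taken from the changed pages:

```python
# Sketch of an AutoGen config_list entry for an OpenAI-compatible proxy.
# The model name, base_url, and api_key are hypothetical placeholders --
# substitute the values your proxy server actually exposes.
config_list = [
    {
        "model": "llama3",                       # model name served by the proxy
        "base_url": "http://localhost:4000/v1",  # OpenAI-compatible endpoint
        "api_key": "not-needed",                 # many local proxies ignore the key
    }
]

# This dict is what would typically be passed as llm_config to an AutoGen agent.
llm_config = {"config_list": config_list}
```

The same shape applies whether the endpoint is a cloud proxy (together.ai) or a locally run one (LiteLLM with Ollama, LM Studio, vLLM); only the `base_url`, `model`, and `api_key` values change.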