From a6a7409ac8f227a660ab9622356030662b729438 Mon Sep 17 00:00:00 2001
From: Eric Zhu
Date: Wed, 20 Mar 2024 17:10:42 -0700
Subject: [PATCH] Fix link in non-openai model doc (#2106)

* Fix link in non-openai model doc

* Update about-using-nonopenai-models.md
---
 .../non-openai-models/about-using-nonopenai-models.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/website/docs/topics/non-openai-models/about-using-nonopenai-models.md b/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
index c9ddc1b3988..e202679f29e 100644
--- a/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
+++ b/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
@@ -32,7 +32,7 @@ authentication which is usually handled through an API key.
 
 Examples of using cloud-based proxy servers providers that have an OpenAI-compatible API are provided below:
 
-- [together.ai example](cloud-togetherai)
+- [together.ai example](/docs/topics/non-openai-models/cloud-togetherai)
 
 ### Locally run proxy servers
 
@@ -46,9 +46,9 @@ OpenAI-compatible API, running them in AutoGen is straightforward.
 
 Examples of using locally run proxy servers that have an OpenAI-compatible API are provided below:
 
-- [LiteLLM with Ollama example](local-litellm-ollama)
-- [LM Studio](local-lm-studio)
-- [vLLM example](local-vllm)
+- [LiteLLM with Ollama example](/docs/topics/non-openai-models/local-litellm-ollama)
+- [LM Studio](/docs/topics/non-openai-models/local-lm-studio)
+- [vLLM example](/docs/topics/non-openai-models/local-vllm)
 
 ````mdx-code-block
 :::tip
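The doc being patched says that proxy servers with an OpenAI-compatible API are straightforward to use from AutoGen: the client only needs the endpoint URL and a key. A minimal sketch of such a configuration entry — the model name, port, and key value below are illustrative assumptions, not taken from the patch:

```python
# Sketch of an AutoGen config_list entry pointing at an OpenAI-compatible
# proxy server (e.g. LiteLLM, LM Studio, or vLLM running locally).
config_list = [
    {
        # Model name exposed by the proxy (hypothetical).
        "model": "mistral-7b-instruct",
        # OpenAI-compatible endpoint of a locally run proxy (hypothetical port).
        "base_url": "http://localhost:4000/v1",
        # Many locally run proxies accept any placeholder key.
        "api_key": "NotRequired",
    }
]

# An agent would then receive this via llm_config={"config_list": config_list}.
print(config_list[0]["base_url"])
```

The same shape works for cloud-based providers such as together.ai by swapping in their endpoint and a real API key.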