From 30b0f8bbe9cc61f250df9e43146837740d72ff96 Mon Sep 17 00:00:00 2001 From: fern-support Date: Fri, 20 Dec 2024 10:37:12 -0500 Subject: [PATCH] updates for links + redirects --- fern/api-reference/reducing-latency.mdx | 4 +- fern/api-reference/text-to-speech.mdx | 6 +- fern/api-reference/websockets.mdx | 6 +- .../api-reference/websocket.mdx | 4 +- fern/developer-guides/how-to-dub-a-video.mdx | 2 +- fern/docs.yml | 143 ++++++++++-------- fern/product/guides/speech-synthesis.mdx | 4 +- fern/product/guides/voiceover-studio.mdx | 2 +- fern/product/introduction.mdx | 2 +- .../voices/voice-lab/scripts/conversation.mdx | 4 +- .../voices/voice-lab/scripts/elearning.mdx | 6 +- .../voices/voice-lab/scripts/meditation.mdx | 6 +- .../voices/voice-lab/scripts/news-article.mdx | 5 +- .../voices/voice-lab/scripts/radio-ad.mdx | 4 +- .../voices/voice-lab/scripts/social-media.mdx | 5 +- .../voice-lab/scripts/the-great-gatsby.mdx | 4 +- 16 files changed, 119 insertions(+), 88 deletions(-) diff --git a/fern/api-reference/reducing-latency.mdx b/fern/api-reference/reducing-latency.mdx index 69e48e64..0080be3f 100644 --- a/fern/api-reference/reducing-latency.mdx +++ b/fern/api-reference/reducing-latency.mdx @@ -12,7 +12,7 @@ slug: api-reference/reducing-latency Our cutting-edge Eleven v2.5 Flash Model is ideally suited for tasks demanding extremely low latency. The new flash model_id is `eleven_flash_v2_5`. -## 2. Use the [streaming API](/api-reference/streaming) +## 2. Use the [streaming API](/docs/api-reference/streaming) ElevenLabs provides three text-to-speech endpoints: @@ -22,7 +22,7 @@ ElevenLabs provides three text-to-speech endpoints: The regular endpoint renders the audio file before returning it in the response. The streaming endpoint streams back the audio as it is being generated, resulting in much lower response time from request to first byte of audio received. For applications that require low latency, the streaming endpoint is therefore recommended. -## 3. Use the [input streaming Websocket](/api-reference/websockets) +## 3. Use the [input streaming Websocket](/docs/api-reference/websockets) For applications where the text prompts can be streamed to the text-to-speech endpoints (such as LLM output), this allows for prompts to be fed to the endpoint while the speech is being generated. You can also configure the streaming chunk size when using the websocket, with smaller chunks generally rendering faster. As such, we recommend sending content word by word, our model and tooling leverages context to ensure that sentence structure and more are persisted to the generated audio even if we only receive a word at a time. diff --git a/fern/api-reference/text-to-speech.mdx b/fern/api-reference/text-to-speech.mdx index 08533b51..e2c0e0f4 100644 --- a/fern/api-reference/text-to-speech.mdx +++ b/fern/api-reference/text-to-speech.mdx @@ -148,11 +148,11 @@ To use any of these languages, simply provide the input text in your language of Dig into the details of using the ElevenLabs TTS API. - + Learn how to use our API with websockets. Learn how to integrate ElevenLabs into your workflow. diff --git a/fern/api-reference/websockets.mdx b/fern/api-reference/websockets.mdx index e0d6c1ef..1086a38b 100644 --- a/fern/api-reference/websockets.mdx +++ b/fern/api-reference/websockets.mdx @@ -24,7 +24,7 @@ However, it may not be the best choice when: * The entire input text is available upfront. 
Given that the generations are partial, some buffering is involved, which could potentially result in slightly higher latency compared to a standard HTTP request. * You want to quickly experiment or prototype. Working with Websockets can be harder and more complex than using a standard HTTP API, which might slow down rapid development and testing. -In these cases, use the [Text to Speech API](/api-reference/text-to-speech) instead. +In these cases, use the [Text to Speech API](/docs/api-reference/text-to-speech) instead. # Protocol @@ -209,13 +209,13 @@ The server will always respond with a message containing the following fields: ## Path parameters - Voice ID to be used, you can use [Get Voices](/api-reference/get-voices) to list all the available voices. + Voice ID to be used, you can use [Get Voices](/docs/api-reference/get-voices) to list all the available voices. ## Query parameters - Identifier of the model that will be used, you can query them using [Get Models](/api-reference/get-models). + Identifier of the model that will be used, you can query them using [Get Models](/docs/api-reference/get-models). Language code (ISO 639-1) used to enforce a language for the model. Currently only our v2.5 Flash & Turbo v2.5 models support language enforcement. For other models, an error will be returned if language code is provided. diff --git a/fern/conversational-ai/api-reference/websocket.mdx b/fern/conversational-ai/api-reference/websocket.mdx index cd0b813f..1ede53ee 100644 --- a/fern/conversational-ai/api-reference/websocket.mdx +++ b/fern/conversational-ai/api-reference/websocket.mdx @@ -240,5 +240,5 @@ To ensure smooth conversations, implement these strategies: ## Additional Resources -- [ElevenLabs Conversational AI Documentation](https://elevenlabs.io/docs/conversational-ai/overview) -- [ElevenLabs Conversational AI SDKs](https://elevenlabs.io/docs/conversational-ai/client-sdk) +- [ElevenLabs Conversational AI Documentation](/docs/conversational-ai/overview) +- [ElevenLabs Conversational AI SDKs](/docs/conversational-ai/client-sdk) diff --git a/fern/developer-guides/how-to-dub-a-video.mdx b/fern/developer-guides/how-to-dub-a-video.mdx index 74034834..dfef450a 100644 --- a/fern/developer-guides/how-to-dub-a-video.mdx +++ b/fern/developer-guides/how-to-dub-a-video.mdx @@ -317,7 +317,7 @@ With this guide and the accompanying code structure, you now have a basic setup Remember to always follow the best practices when dealing with API keys and sensitive data, and consult the ElevenLabs API documentation for more advanced features and options. Happy dubbing! -For additional information on dubbing capabilities, translation services, and available languages, please refer to the [ElevenLabs API documentation](https://elevenlabs.docs.buildwithfern.com/docs/developers/api-reference/dubbing/dub-a-video-or-an-audio-file). +For additional information on dubbing capabilities, translation services, and available languages, please refer to the [ElevenLabs API documentation](/docs/api-reference/dubbing/dub-a-video-or-an-audio-file). Should you encounter any issues or have questions, our [GitHub Issues page](https://github.com/elevenlabs/elevenlabs-docs/issues) is open for your queries and feedback. 
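The dubbing guide above now points at the `/docs/api-reference/dubbing/dub-a-video-or-an-audio-file` reference. For orientation, a minimal sketch of that flow with `requests` follows — it assumes the `/v1/dubbing` endpoints behave as the linked reference describes (multipart upload with `target_lang`, polling by `dubbing_id`, then downloading per language). The API key, file names, and exact field and status values are placeholders to verify against the current reference.

```python
import time
import requests

# Placeholders: set a real key and input file before running.
API_KEY = "YOUR_XI_API_KEY"
BASE_URL = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": API_KEY}


def start_dub(path: str, target_lang: str) -> str:
    """Upload a file for dubbing and return the dubbing project id."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{BASE_URL}/dubbing",
            headers=HEADERS,
            files={"file": f},
            data={"target_lang": target_lang},
        )
    resp.raise_for_status()
    return resp.json()["dubbing_id"]


def download_when_ready(dubbing_id: str, target_lang: str, out_path: str) -> None:
    """Poll the project until it finishes, then save the dubbed file."""
    while True:
        meta = requests.get(f"{BASE_URL}/dubbing/{dubbing_id}", headers=HEADERS).json()
        status = meta.get("status")
        if status == "dubbed":  # assumed terminal status; verify against the reference
            break
        if status == "failed":
            raise RuntimeError(f"Dubbing failed: {meta}")
        time.sleep(10)
    dubbed = requests.get(
        f"{BASE_URL}/dubbing/{dubbing_id}/audio/{target_lang}", headers=HEADERS
    )
    dubbed.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(dubbed.content)


if __name__ == "__main__":
    project_id = start_dub("input.mp4", target_lang="es")
    download_when_ready(project_id, "es", "dubbed_es.mp4")
```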
diff --git a/fern/docs.yml b/fern/docs.yml index 248609ad..43590dbc 100644 --- a/fern/docs.yml +++ b/fern/docs.yml @@ -76,6 +76,19 @@ navigation: path: product/voices/voice-lab/instant-voice-cloning.mdx - page: Professional Voice Cloning path: product/voices/voice-lab/professional-voice-cloning.mdx + - section: Scripts + hidden: true + contents: + - page: Audiobook + path: product/voices/voice-lab/scripts/the-great-gatsby.mdx + - page: News Article + path: product/voices/voice-lab/scripts/news-article.mdx + - page: Social Media + path: product/voices/voice-lab/scripts/social-media.mdx + - page: Meditation + path: product/voices/voice-lab/scripts/meditation.mdx + - page: Elearning + path: product/voices/voice-lab/scripts/elearning.mdx - section: Voice Library contents: - page: Overview @@ -493,7 +506,7 @@ redirects: - source: /docs/api-reference/add-chapter destination: /docs/api-reference/chapters/add-chapter - source: /docs/api-reference/add-project - destination: /docs/api-reference/projects/add + destination: /docs/api-reference/projects/add-project - source: /docs/api-reference/add-shared-voice destination: /docs/api-reference/voice-library/add-sharing-voice - source: /docs/api-reference/add-voice @@ -664,70 +677,74 @@ redirects: destination: /docs/conversational-ai/api-reference/phone-numbers/create-phone-number - source: /docs/conversational-ai/api-reference/post-conversational-ai-widget-avatar destination: /docs/conversational-ai/api-reference/agents/post-agent-avatar - - source: "/api-reference/how-to-use-tts-with-streaming-in-python" - destination: "/developer-guides/how-to-use-tts-with-streaming" - - source: "/api-reference/reducing-latency" - destination: "/developer-guides/reducing-latency" - - source: "/overview" - destination: "/product/introduction" - - source: "/projects/:slug*" - destination: "/product/projects/:slug*" - - source: "/sound-effects/:slug*" - destination: "/product/sound-effects/:slug*" - - source: "/speech-synthesis/:slug*" - destination: "/product/speech-synthesis/:slug*" - - source: "/troubleshooting/:slug*" - destination: "/product/troubleshooting/:slug*" - - source: "/voiceover-studio/:slug*" - destination: "/product/voiceover-studio/:slug*" - - source: "/voices/:slug*" - destination: "/product/voices/:slug*" - - source: "/workspace/:slug*" - destination: "/product/workspace/:slug*" - - source: "/audio-native/:slug*" - destination: "/product/audio-native/:slug*" - - source: "/dubbing/:slug*" - destination: "/product/dubbing/:slug*" - - source: "/guides/:slug*" - destination: "/product/guides/:slug*" - - source: "/introduction" - destination: "/product/introduction" - - source: "/api-reference/how-to-dub-a-video" - destination: "/developer-guides/how-to-dub-a-video" - - source: "/api-reference/how-to-use-pronounciation-dictionaries" - destination: "/developer-guides/how-to-use-pronounciation-dictionaries" - - source: "/api-reference/how-to-use-request-stitching" - destination: "/developer-guides/how-to-use-request-stitching" - - source: "/api-reference/how-to-use-text-to-sound-effects" - destination: "/developer-guides/how-to-use-text-to-sound-effects" - - source: "/api-reference/how-to-use-tts-with-streaming" - destination: "/developer-guides/how-to-use-tts-with-streaming" - - source: "/api-reference/how-to-use-websocket" - destination: "/developer-guides/how-to-use-websocket" - - source: "/api-reference/integrating-with-twilio" - destination: "/developer-guides/integrating-with-twilio" - - source: "/api-reference/specifying-server-location" - 
destination: "/developer-guides/specifying-server-location" - - source: "/conversational-ai" - destination: "/conversational-ai/docs" - - source: "/product/conversational-ai/overview" - destination: "/conversational-ai/docs/introduction" - - source: "/libraries/conversational-ai-sdk-python" - destination: "/conversational-ai/libraries/conversational-ai-sdk-python" - - source: "/libraries/conversational-ai-sdk-js" - destination: "/conversational-ai/libraries/conversational-ai-sdk-js" - - source: "/libraries/conversational-ai-sdk-react" - destination: "/conversational-ai/libraries/conversational-ai-sdk-react" - - source: "/libraries/conversational-ai-sdk-swift" - destination: "/conversational-ai/libraries/conversational-ai-sdk-swift" - - source: "/developer-guides/conversational-ai-guide" - destination: "/conversational-ai/guides/conversational-ai-guide" - - source: "/product/conversational-ai/tools" - destination: "/conversational-ai/customization/tools" - - source: "/product/conversational-ai/*" - destination: "/conversational-ai/docs" + - source: "/docs/api-reference/how-to-use-tts-with-streaming-in-python" + destination: "/docs/developer-guides/how-to-use-tts-with-streaming" + - source: "/docs/api-reference/reducing-latency" + destination: "/docs/developer-guides/reducing-latency" + - source: "/docs/overview" + destination: "/docs/product/introduction" + - source: "/docs/projects/:slug*" + destination: "/docs/product/projects/:slug*" + - source: "/docs/sound-effects/:slug*" + destination: "/docs/product/sound-effects/:slug*" + - source: "/docs/speech-synthesis/:slug*" + destination: "/docs/product/speech-synthesis/:slug*" + - source: "/docs/troubleshooting/:slug*" + destination: "/docs/product/troubleshooting/:slug*" + - source: "/docs/voiceover-studio/:slug*" + destination: "/docs/product/voiceover-studio/:slug*" + - source: "/docs/voices/:slug*" + destination: "/docs/product/voices/:slug*" + - source: "/docs/workspace/:slug*" + destination: "/docs/product/workspace/:slug*" + - source: "/docs/audio-native/:slug*" + destination: "/docs/product/audio-native/:slug*" + - source: "/docs/dubbing/:slug*" + destination: "/docs/product/dubbing/:slug*" + - source: "/docs/guides/:slug*" + destination: "/docs/product/guides/:slug*" + - source: "/docs/introduction" + destination: "/docs/product/introduction" + - source: "/docs/api-reference/how-to-dub-a-video" + destination: "/docs/developer-guides/how-to-dub-a-video" + - source: "/docs/api-reference/how-to-use-pronounciation-dictionaries" + destination: "/docs/developer-guides/how-to-use-pronounciation-dictionaries" + - source: "/docs/api-reference/how-to-use-request-stitching" + destination: "/docs/developer-guides/how-to-use-request-stitching" + - source: "/docs/api-reference/how-to-use-text-to-sound-effects" + destination: "/docs/developer-guides/how-to-use-text-to-sound-effects" + - source: "/docs/api-reference/how-to-use-tts-with-streaming" + destination: "/docs/developer-guides/how-to-use-tts-with-streaming" + - source: "/docs/api-reference/how-to-use-websocket" + destination: "/docs/developer-guides/how-to-use-websocket" + - source: "/docs/api-reference/integrating-with-twilio" + destination: "/docs/developer-guides/integrating-with-twilio" + - source: "/docs/api-reference/specifying-server-location" + destination: "/docs/developer-guides/specifying-server-location" + - source: "/docs/conversational-ai" + destination: "/docs/conversational-ai/docs" + - source: "/docs/product/conversational-ai/overview" + destination: 
"/docs/conversational-ai/docs/introduction" + - source: "/docs/libraries/conversational-ai-sdk-python" + destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-python" + - source: "/docs/libraries/conversational-ai-sdk-js" + destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-js" + - source: "/docs/libraries/conversational-ai-sdk-react" + destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-react" + - source: "/docs/libraries/conversational-ai-sdk-swift" + destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-swift" + - source: "/docs/developer-guides/conversational-ai-guide" + destination: "/docs/conversational-ai/guides/conversational-ai-guide" + - source: "/docs/product/conversational-ai/tools" + destination: "/docs/conversational-ai/customization/tools" + - source: "/docs/product/conversational-ai/*" + destination: "/docs/conversational-ai/docs" - source: "/docs/api-reference/text-to-speech/text-to-speech" destination: "/docs/api-reference/text-to-speech/convert" + - source: "/docs/conversational-ai" + destination: "/docs/conversational-ai/docs/introduction" + - source: "/docs/conversational-ai/client-sdk" + destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-python" analytics: posthog: diff --git a/fern/product/guides/speech-synthesis.mdx b/fern/product/guides/speech-synthesis.mdx index 8b6ccf15..5f2a7bc3 100644 --- a/fern/product/guides/speech-synthesis.mdx +++ b/fern/product/guides/speech-synthesis.mdx @@ -12,7 +12,7 @@ Let’s touch on models and voice settings briefly before generating our audio c ### Models -More detailed information about the models is available [here](product/speech-synthesis/models). +More detailed information about the models is available [here](/docs/product/speech-synthesis/models). - **Multilingual v2 (default)**: Supports 28 languages, known for its accuracy and stability, especially when using high-quality samples. - **Flash v2.5**: Generates speech in 32 languages with low latency, ideal for real-time applications. @@ -22,7 +22,7 @@ More detailed information about the models is available [here](product/speech-sy ### Voice Settings -More detailed information about the voice settings is available [here](product/speech-synthesis/voice-settings). +More detailed information about the voice settings is available [here](/docs/product/speech-synthesis/voice-settings). - **Stability**: Adjusts the emotional range and consistency of the voice. Lower settings result in more variation and emotion, while higher settings produce a more stable, monotone voice. - **Similarity**: Controls how closely the AI matches the original voice. High settings may replicate artifacts from low-quality audio. diff --git a/fern/product/guides/voiceover-studio.mdx b/fern/product/guides/voiceover-studio.mdx index 771ea76d..57b1e0f9 100644 --- a/fern/product/guides/voiceover-studio.mdx +++ b/fern/product/guides/voiceover-studio.mdx @@ -17,4 +17,4 @@ Similar to the Dubbing Studio, the new **Voiceover Studio** offers users the cha **Exercise**: Upload your own video and follow along with Mike on how to use the Voiceover Studio [here](https://www.youtube.com/watch?v=Ka6ljU1MULc). -For more information, visit the [ElevenLabs Voiceover Studio overview](product/voiceover-studio/overview). +For more information, visit the [ElevenLabs Voiceover Studio overview](/docs/product/voiceover-studio/overview). 
diff --git a/fern/product/introduction.mdx b/fern/product/introduction.mdx index e70ee073..f54db918 100644 --- a/fern/product/introduction.mdx +++ b/fern/product/introduction.mdx @@ -295,7 +295,7 @@ Finally, we will cover our Conversation AI platform, which provides an easy setu title="Full Documentation" icon="regular book" iconPosition="left" - href="/docs/conversational-ai" + href="/docs/conversational-ai/docs/introduction" />
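Several of the redirects above use the `:slug*` wildcard. As a rough local sanity check of the intended mapping — assuming the wildcard captures the remaining path segments — the sketch below resolves a few old paths against entries copied from the table. It is illustrative only and not part of the docs build; Fern performs this matching itself from docs.yml.

```python
# A few entries copied from the redirects table above; the full table lives in docs.yml.
REDIRECTS = [
    ("/docs/guides/:slug*", "/docs/product/guides/:slug*"),
    ("/docs/voices/:slug*", "/docs/product/voices/:slug*"),
    ("/docs/api-reference/reducing-latency", "/docs/developer-guides/reducing-latency"),
]


def resolve(path: str) -> str:
    """Return the redirect target for `path`, or `path` unchanged if nothing matches."""
    for source, destination in REDIRECTS:
        if source.endswith("/:slug*"):
            prefix = source[: -len("/:slug*")]
            if path == prefix or path.startswith(prefix + "/"):
                slug = path[len(prefix):].lstrip("/")
                return destination.replace(":slug*", slug).rstrip("/")
        elif path == source:
            return destination
    return path


assert resolve("/docs/guides/speech-synthesis") == "/docs/product/guides/speech-synthesis"
assert resolve("/docs/api-reference/reducing-latency") == "/docs/developer-guides/reducing-latency"
```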