
Commit

updates for links + redirects
fern-support committed Dec 20, 2024
1 parent 43944c2 commit 30b0f8b
Showing 16 changed files with 119 additions and 88 deletions.
4 changes: 2 additions & 2 deletions fern/api-reference/reducing-latency.mdx
@@ -12,7 +12,7 @@ slug: api-reference/reducing-latency

Our cutting-edge Eleven v2.5 Flash Model is ideally suited for tasks demanding extremely low latency. The new flash model_id is `eleven_flash_v2_5`.
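
For illustration, a minimal request using this model might look like the following sketch, assuming the standard `/v1/text-to-speech/{voice_id}` endpoint (the API key and voice ID are placeholders):

```python
import requests

API_KEY = "YOUR_XI_API_KEY"   # placeholder, use your own key
VOICE_ID = "YOUR_VOICE_ID"    # placeholder voice ID

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Hello! This is a low-latency test.",
        "model_id": "eleven_flash_v2_5",
    },
)
response.raise_for_status()

# The response body is the rendered audio file.
with open("output.mp3", "wb") as f:
    f.write(response.content)
```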

## 2. Use the [streaming API](/api-reference/streaming)
## 2. Use the [streaming API](/docs/api-reference/streaming)

ElevenLabs provides three text-to-speech endpoints:

@@ -22,7 +22,7 @@ ElevenLabs provides three text-to-speech endpoints:

The regular endpoint renders the audio file before returning it in the response. The streaming endpoint streams back the audio as it is being generated, resulting in much lower response time from request to first byte of audio received. For applications that require low latency, the streaming endpoint is therefore recommended.
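
As a rough sketch, the streaming endpoint can be consumed like this, assuming it lives at `/v1/text-to-speech/{voice_id}/stream` (placeholder credentials):

```python
import requests

API_KEY = "YOUR_XI_API_KEY"   # placeholder
VOICE_ID = "YOUR_VOICE_ID"    # placeholder

with requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/stream",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Streaming lets playback begin before the full file is rendered.",
        "model_id": "eleven_flash_v2_5",
    },
    stream=True,  # do not buffer the whole response body
) as response:
    response.raise_for_status()
    with open("output.mp3", "wb") as f:
        # Audio chunks arrive as they are generated.
        for chunk in response.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
```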

## 3. Use the [input streaming Websocket](/api-reference/websockets)
## 3. Use the [input streaming Websocket](/docs/api-reference/websockets)

For applications where the text prompts can be streamed to the text-to-speech endpoints (such as LLM output), the websocket allows prompts to be fed to the endpoint while the speech is being generated. You can also configure the streaming chunk size when using the websocket, with smaller chunks generally rendering faster. As such, we recommend sending content word by word; our model and tooling leverage context to ensure that sentence structure and more are preserved in the generated audio even if we only receive a word at a time.
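
A minimal sketch of word-by-word input streaming, assuming the `/v1/text-to-speech/{voice_id}/stream-input` endpoint and the `websockets` Python package (the API key, voice ID, and message field names below are assumptions for illustration):

```python
import asyncio
import base64
import json

import websockets  # pip install websockets

API_KEY = "YOUR_XI_API_KEY"   # placeholder
VOICE_ID = "YOUR_VOICE_ID"    # placeholder
URI = (
    f"wss://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/stream-input"
    "?model_id=eleven_flash_v2_5"
)

async def speak(words):
    async with websockets.connect(URI) as ws:
        # The first message opens the stream and authenticates.
        await ws.send(json.dumps({"text": " ", "xi_api_key": API_KEY}))

        # Feed the prompt one word at a time, as it becomes available.
        for word in words:
            await ws.send(json.dumps({"text": word + " "}))

        # An empty string signals that no more text is coming.
        await ws.send(json.dumps({"text": ""}))

        with open("output.mp3", "wb") as f:
            async for message in ws:
                data = json.loads(message)
                if data.get("audio"):
                    f.write(base64.b64decode(data["audio"]))
                if data.get("isFinal"):
                    break

asyncio.run(speak("Hello from the input streaming websocket".split()))
```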

6 changes: 3 additions & 3 deletions fern/api-reference/text-to-speech.mdx
@@ -148,11 +148,11 @@ To use any of these languages, simply provide the input text in your language of
<Card
title="Streaming API"
icon="regular book-open-cover"
href="/api-reference/streaming"
href="/docs/api-reference/streaming"
>
Dig into the details of using the ElevenLabs TTS API.
</Card>
<Card title="Websockets" icon="regular comments" href="/api-reference/websockets">
<Card title="Websockets" icon="regular comments" href="/docs/api-reference/websockets">
Learn how to use our API with websockets.
</Card>
<Card
@@ -165,7 +165,7 @@ To use any of these languages, simply provide the input text in your language of
<Card
title="Integration Guides"
icon="regular rectangle-pro"
href="/developer-guides/how-to-use-tts-with-streaming"
href="/docs/developer-guides/how-to-use-tts-with-streaming"
>
Learn how to integrate ElevenLabs into your workflow.
</Card>
6 changes: 3 additions & 3 deletions fern/api-reference/websockets.mdx
@@ -24,7 +24,7 @@ However, it may not be the best choice when:
* The entire input text is available upfront. Given that the generations are partial, some buffering is involved, which could potentially result in slightly higher latency compared to a standard HTTP request.
* You want to quickly experiment or prototype. Working with Websockets can be harder and more complex than using a standard HTTP API, which might slow down rapid development and testing.

In these cases, use the [Text to Speech API](/api-reference/text-to-speech) instead.
In these cases, use the [Text to Speech API](/docs/api-reference/text-to-speech) instead.

# Protocol

@@ -209,13 +209,13 @@ The server will always respond with a message containing the following fields:
## Path parameters

<ParamField path="voice_id" type="string">
Voice ID to be used. You can use [Get Voices](/api-reference/get-voices) to list all the available voices.
Voice ID to be used. You can use [Get Voices](/docs/api-reference/get-voices) to list all the available voices.
</ParamField>

## Query parameters

<ParamField query="model_id" type="string">
Identifier of the model that will be used. You can query the available models using [Get Models](/api-reference/get-models).
Identifier of the model that will be used. You can query the available models using [Get Models](/docs/api-reference/get-models).
</ParamField>
<ParamField query="language_code" type="string">
Language code (ISO 639-1) used to enforce a language for the model. Currently only our Flash v2.5 and Turbo v2.5 models support language enforcement; for other models, an error will be returned if a language code is provided.
4 changes: 2 additions & 2 deletions fern/conversational-ai/api-reference/websocket.mdx
@@ -240,5 +240,5 @@ To ensure smooth conversations, implement these strategies:

## Additional Resources

- [ElevenLabs Conversational AI Documentation](https://elevenlabs.io/docs/conversational-ai/overview)
- [ElevenLabs Conversational AI SDKs](https://elevenlabs.io/docs/conversational-ai/client-sdk)
- [ElevenLabs Conversational AI Documentation](/docs/conversational-ai/overview)
- [ElevenLabs Conversational AI SDKs](/docs/conversational-ai/client-sdk)
2 changes: 1 addition & 1 deletion fern/developer-guides/how-to-dub-a-video.mdx
@@ -317,7 +317,7 @@ With this guide and the accompanying code structure, you now have a basic setup

Remember to always follow the best practices when dealing with API keys and sensitive data, and consult the ElevenLabs API documentation for more advanced features and options. Happy dubbing!

For additional information on dubbing capabilities, translation services, and available languages, please refer to the [ElevenLabs API documentation](https://elevenlabs.docs.buildwithfern.com/docs/developers/api-reference/dubbing/dub-a-video-or-an-audio-file).
For additional information on dubbing capabilities, translation services, and available languages, please refer to the [ElevenLabs API documentation](/docs/api-reference/dubbing/dub-a-video-or-an-audio-file).

Should you encounter any issues or have questions, our [GitHub Issues page](https://github.com/elevenlabs/elevenlabs-docs/issues) is open for your queries and feedback.

143 changes: 80 additions & 63 deletions fern/docs.yml
@@ -76,6 +76,19 @@ navigation:
path: product/voices/voice-lab/instant-voice-cloning.mdx
- page: Professional Voice Cloning
path: product/voices/voice-lab/professional-voice-cloning.mdx
- section: Scripts
hidden: true
contents:
- page: Audiobook
path: product/voices/voice-lab/scripts/the-great-gatsby.mdx
- page: News Article
path: product/voices/voice-lab/scripts/news-article.mdx
- page: Social Media
path: product/voices/voice-lab/scripts/social-media.mdx
- page: Meditation
path: product/voices/voice-lab/scripts/meditation.mdx
- page: Elearning
path: product/voices/voice-lab/scripts/elearning.mdx
- section: Voice Library
contents:
- page: Overview
@@ -493,7 +506,7 @@ redirects:
- source: /docs/api-reference/add-chapter
destination: /docs/api-reference/chapters/add-chapter
- source: /docs/api-reference/add-project
destination: /docs/api-reference/projects/add
destination: /docs/api-reference/projects/add-project
- source: /docs/api-reference/add-shared-voice
destination: /docs/api-reference/voice-library/add-sharing-voice
- source: /docs/api-reference/add-voice
@@ -664,70 +677,74 @@ redirects:
destination: /docs/conversational-ai/api-reference/phone-numbers/create-phone-number
- source: /docs/conversational-ai/api-reference/post-conversational-ai-widget-avatar
destination: /docs/conversational-ai/api-reference/agents/post-agent-avatar
- source: "/api-reference/how-to-use-tts-with-streaming-in-python"
destination: "/developer-guides/how-to-use-tts-with-streaming"
- source: "/api-reference/reducing-latency"
destination: "/developer-guides/reducing-latency"
- source: "/overview"
destination: "/product/introduction"
- source: "/projects/:slug*"
destination: "/product/projects/:slug*"
- source: "/sound-effects/:slug*"
destination: "/product/sound-effects/:slug*"
- source: "/speech-synthesis/:slug*"
destination: "/product/speech-synthesis/:slug*"
- source: "/troubleshooting/:slug*"
destination: "/product/troubleshooting/:slug*"
- source: "/voiceover-studio/:slug*"
destination: "/product/voiceover-studio/:slug*"
- source: "/voices/:slug*"
destination: "/product/voices/:slug*"
- source: "/workspace/:slug*"
destination: "/product/workspace/:slug*"
- source: "/audio-native/:slug*"
destination: "/product/audio-native/:slug*"
- source: "/dubbing/:slug*"
destination: "/product/dubbing/:slug*"
- source: "/guides/:slug*"
destination: "/product/guides/:slug*"
- source: "/introduction"
destination: "/product/introduction"
- source: "/api-reference/how-to-dub-a-video"
destination: "/developer-guides/how-to-dub-a-video"
- source: "/api-reference/how-to-use-pronounciation-dictionaries"
destination: "/developer-guides/how-to-use-pronounciation-dictionaries"
- source: "/api-reference/how-to-use-request-stitching"
destination: "/developer-guides/how-to-use-request-stitching"
- source: "/api-reference/how-to-use-text-to-sound-effects"
destination: "/developer-guides/how-to-use-text-to-sound-effects"
- source: "/api-reference/how-to-use-tts-with-streaming"
destination: "/developer-guides/how-to-use-tts-with-streaming"
- source: "/api-reference/how-to-use-websocket"
destination: "/developer-guides/how-to-use-websocket"
- source: "/api-reference/integrating-with-twilio"
destination: "/developer-guides/integrating-with-twilio"
- source: "/api-reference/specifying-server-location"
destination: "/developer-guides/specifying-server-location"
- source: "/conversational-ai"
destination: "/conversational-ai/docs"
- source: "/product/conversational-ai/overview"
destination: "/conversational-ai/docs/introduction"
- source: "/libraries/conversational-ai-sdk-python"
destination: "/conversational-ai/libraries/conversational-ai-sdk-python"
- source: "/libraries/conversational-ai-sdk-js"
destination: "/conversational-ai/libraries/conversational-ai-sdk-js"
- source: "/libraries/conversational-ai-sdk-react"
destination: "/conversational-ai/libraries/conversational-ai-sdk-react"
- source: "/libraries/conversational-ai-sdk-swift"
destination: "/conversational-ai/libraries/conversational-ai-sdk-swift"
- source: "/developer-guides/conversational-ai-guide"
destination: "/conversational-ai/guides/conversational-ai-guide"
- source: "/product/conversational-ai/tools"
destination: "/conversational-ai/customization/tools"
- source: "/product/conversational-ai/*"
destination: "/conversational-ai/docs"
- source: "/docs/api-reference/how-to-use-tts-with-streaming-in-python"
destination: "/docs/developer-guides/how-to-use-tts-with-streaming"
- source: "/docs/api-reference/reducing-latency"
destination: "/docs/developer-guides/reducing-latency"
- source: "/docs/overview"
destination: "/docs/product/introduction"
- source: "/docs/projects/:slug*"
destination: "/docs/product/projects/:slug*"
- source: "/docs/sound-effects/:slug*"
destination: "/docs/product/sound-effects/:slug*"
- source: "/docs/speech-synthesis/:slug*"
destination: "/docs/product/speech-synthesis/:slug*"
- source: "/docs/troubleshooting/:slug*"
destination: "/docs/product/troubleshooting/:slug*"
- source: "/docs/voiceover-studio/:slug*"
destination: "/docs/product/voiceover-studio/:slug*"
- source: "/docs/voices/:slug*"
destination: "/docs/product/voices/:slug*"
- source: "/docs/workspace/:slug*"
destination: "/docs/product/workspace/:slug*"
- source: "/docs/audio-native/:slug*"
destination: "/docs/product/audio-native/:slug*"
- source: "/docs/dubbing/:slug*"
destination: "/docs/product/dubbing/:slug*"
- source: "/docs/guides/:slug*"
destination: "/docs/product/guides/:slug*"
- source: "/docs/introduction"
destination: "/docs/product/introduction"
- source: "/docs/api-reference/how-to-dub-a-video"
destination: "/docs/developer-guides/how-to-dub-a-video"
- source: "/docs/api-reference/how-to-use-pronounciation-dictionaries"
destination: "/docs/developer-guides/how-to-use-pronounciation-dictionaries"
- source: "/docs/api-reference/how-to-use-request-stitching"
destination: "/docs/developer-guides/how-to-use-request-stitching"
- source: "/docs/api-reference/how-to-use-text-to-sound-effects"
destination: "/docs/developer-guides/how-to-use-text-to-sound-effects"
- source: "/docs/api-reference/how-to-use-tts-with-streaming"
destination: "/docs/developer-guides/how-to-use-tts-with-streaming"
- source: "/docs/api-reference/how-to-use-websocket"
destination: "/docs/developer-guides/how-to-use-websocket"
- source: "/docs/api-reference/integrating-with-twilio"
destination: "/docs/developer-guides/integrating-with-twilio"
- source: "/docs/api-reference/specifying-server-location"
destination: "/docs/developer-guides/specifying-server-location"
- source: "/docs/conversational-ai"
destination: "/docs/conversational-ai/docs"
- source: "/docs/product/conversational-ai/overview"
destination: "/docs/conversational-ai/docs/introduction"
- source: "/docs/libraries/conversational-ai-sdk-python"
destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-python"
- source: "/docs/libraries/conversational-ai-sdk-js"
destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-js"
- source: "/docs/libraries/conversational-ai-sdk-react"
destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-react"
- source: "/docs/libraries/conversational-ai-sdk-swift"
destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-swift"
- source: "/docs/developer-guides/conversational-ai-guide"
destination: "/docs/conversational-ai/guides/conversational-ai-guide"
- source: "/docs/product/conversational-ai/tools"
destination: "/docs/conversational-ai/customization/tools"
- source: "/docs/product/conversational-ai/*"
destination: "/docs/conversational-ai/docs"
- source: "/docs/api-reference/text-to-speech/text-to-speech"
destination: "/docs/api-reference/text-to-speech/convert"
- source: "/docs/conversational-ai"
destination: "/docs/conversational-ai/docs/introduction"
- source: "/docs/conversational-ai/client-sdk"
destination: "/docs/conversational-ai/libraries/conversational-ai-sdk-python"

analytics:
posthog:
4 changes: 2 additions & 2 deletions fern/product/guides/speech-synthesis.mdx
@@ -12,7 +12,7 @@ Let’s touch on models and voice settings briefly before generating our audio c

### Models

More detailed information about the models is available [here](product/speech-synthesis/models).
More detailed information about the models is available [here](/docs/product/speech-synthesis/models).

- **Multilingual v2 (default)**: Supports 28 languages, known for its accuracy and stability, especially when using high-quality samples.
- **Flash v2.5**: Generates speech in 32 languages with low latency, ideal for real-time applications.
@@ -22,7 +22,7 @@ More detailed information about the models is available [here](product/speech-sy

### Voice Settings

More detailed information about the voice settings is available [here](product/speech-synthesis/voice-settings).
More detailed information about the voice settings is available [here](/docs/product/speech-synthesis/voice-settings).

- **Stability**: Adjusts the emotional range and consistency of the voice. Lower settings result in more variation and emotion, while higher settings produce a more stable, monotone voice.
- **Similarity**: Controls how closely the AI matches the original voice. High settings may replicate artifacts from low-quality audio.
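
As an illustration, these two controls map onto a `voice_settings` object in an API request roughly like this (a sketch: `similarity_boost` is assumed to be the field behind the Similarity control, and the values are arbitrary):

```python
payload = {
    "text": "A short line read with a steadier, more consistent delivery.",
    "model_id": "eleven_multilingual_v2",
    "voice_settings": {
        "stability": 0.7,         # higher = more consistent, more monotone
        "similarity_boost": 0.8,  # higher = closer match to the original voice
    },
}
```
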
2 changes: 1 addition & 1 deletion fern/product/guides/voiceover-studio.mdx
@@ -17,4 +17,4 @@ Similar to the Dubbing Studio, the new **Voiceover Studio** offers users the cha

**Exercise**: Upload your own video and follow along with Mike on how to use the Voiceover Studio [here](https://www.youtube.com/watch?v=Ka6ljU1MULc).

For more information, visit the [ElevenLabs Voiceover Studio overview](product/voiceover-studio/overview).
For more information, visit the [ElevenLabs Voiceover Studio overview](/docs/product/voiceover-studio/overview).
2 changes: 1 addition & 1 deletion fern/product/introduction.mdx
@@ -295,7 +295,7 @@ Finally, we will cover our Conversation AI platform, which provides an easy setu
title="Full Documentation"
icon="regular book"
iconPosition="left"
href="/docs/conversational-ai"
href="/docs/conversational-ai/docs/introduction"
/>

<Card
4 changes: 2 additions & 2 deletions fern/product/voices/voice-lab/scripts/conversation.mdx
@@ -1,6 +1,6 @@
---
title: ""
description: ""
title: "Conversation Script"
slug: /product/voices/voice-lab/scripts/conversation
---


6 changes: 5 additions & 1 deletion fern/product/voices/voice-lab/scripts/elearning.mdx
@@ -1,4 +1,8 @@
## Elearning Script
---
title: "E-learning"
slug: /product/voices/voice-lab/scripts/elearning
---


### Mathematics

6 changes: 5 additions & 1 deletion fern/product/voices/voice-lab/scripts/meditation.mdx
@@ -1,4 +1,8 @@
## Meditation Script
---
title: "Meditation"
slug: /product/voices/voice-lab/scripts/meditation
---

### My Reflections

It's been a long road, a journey filled with triumphs and tribulations, but here I am, facing my own reflection. There was a time when my music touched the hearts of millions when my voice soared and my name was on everyone's lips. But now, all that remains are the echoes of my past.
5 changes: 4 additions & 1 deletion fern/product/voices/voice-lab/scripts/news-article.mdx
@@ -1,4 +1,7 @@
## News Articles Script
---
title: "News Articles"
slug: /product/voices/voice-lab/scripts/news-article
---

### ARTICLES

4 changes: 2 additions & 2 deletions fern/product/voices/voice-lab/scripts/radio-ad.mdx
@@ -1,6 +1,6 @@
---
title: ""
description: ""
title: "Radio Ad"
slug: /product/voices/voice-lab/scripts/radio-ad
---

“Are you ready to power through your day with a smile? Introducing Blissful Bites Protein Bars, the ultimate fuel for your body and your happiness!”
5 changes: 4 additions & 1 deletion fern/product/voices/voice-lab/scripts/social-media.mdx
@@ -1,4 +1,7 @@
## Social Media Script
---
title: "Social Media"
slug: /product/voices/voice-lab/scripts/social-media
---

### Tutorials

4 changes: 2 additions & 2 deletions fern/product/voices/voice-lab/scripts/the-great-gatsby.mdx
@@ -1,6 +1,6 @@
---
title: ""
description: ""
title: "The Great Gatsby"
slug: /product/voices/voice-lab/scripts/the-great-gatsby
---


