Directing Web Speech API audio to a specific output device? #102
I've criticized the current Web Speech API for being too tightly coupled to the microphone and the default speaker output. I suggest the Web Speech WG work to plug into existing audio sources and sinks in the platform through MediaStreamTrack (there's precedent in Web Audio). Output-device selection would then fall out for free, e.g.:

```js
audioElement.srcObject = speechSynthesis.createMediaStreamDestination();
audioElement.setSinkId(await navigator.mediaDevices.selectAudioOutput({deviceId}));
speechSynthesis.speak(new SpeechSynthesisUtterance("Hello world!"));
```
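Part of this proposal is implementable today: `selectAudioOutput()` is specified in the Audio Output Devices API and `HTMLMediaElement.setSinkId()` already ships (support and permission requirements vary by browser); the genuinely missing piece is a way to get speech synthesis output into a MediaStream. A minimal feature-detection sketch — `canSelectOutput` and `routeToOutput` are illustrative helper names, not part of any spec:

```javascript
// Feature-detect the Audio Output Devices API piece the proposal relies on.
function canSelectOutput(nav = globalThis.navigator) {
  return Boolean(
    nav &&
    nav.mediaDevices &&
    typeof nav.mediaDevices.selectAudioOutput === "function"
  );
}

// Browser-only usage: prompt the user for an output device and route an
// <audio> element to it (most implementations require a user gesture).
async function routeToOutput(audioElement) {
  if (!canSelectOutput()) throw new Error("selectAudioOutput() unsupported");
  const device = await navigator.mediaDevices.selectAudioOutput();
  await audioElement.setSinkId(device.deviceId);
}
```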
Neither this specification nor Media Capture and Streams defines capture of devices other than microphone input. The suggested code is currently impossible in Chromium: Chromium refuses to support listing or capture of monitor devices on Linux (https://bugs.chromium.org/p/chromium/issues/detail?id=931749). I have filed multiple specification and implementation issues to support what this issue requests; in brief, see w3c/mediacapture-main#720. To capture output of …
Web Speech API does not define any speech synthesis algorithms, and neither Chromium nor Firefox ships with a speech synthesis engine; on Linux, Web Speech API establishes a socket connection to Speech Dispatcher. Web Speech API does not currently specify any means to capture the resulting audio output. Since Web Speech API simply communicates with a locally installed speech synthesis engine, one approach is not to use Web Speech API at all: install one or more speech synthesis engines locally and communicate with the engine directly. For example, the output from …
@josephrocca There is no simple way to get the direct output from a speech synthesis engine other than calling the engine directly and processing the raw audio output. Technically, a socket connection can be established to the engine directly. No specification — including Media Capture and Streams, Audio Output Devices API, Web Audio API, or Web Speech API (see "MediaStream, ArrayBuffer, Blob audio result from speak() for recording?", https://github.com/WebAudio/web-audio-api-v2/issues/10#issuecomment-682259080) — defines a means to access or capture speech synthesis engine output directly.
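Calling the engine directly usually means running it behind a small local HTTP endpoint and playing the raw audio it returns, which restores full control over the output device. A sketch, assuming a hypothetical local server at `http://localhost:8000/speak` that returns WAV audio (the endpoint, parameters, and helper names are illustrative; nothing here is specified by Web Speech API):

```javascript
// Build a request URL for a hypothetical local synthesis server.
function buildSynthesisUrl(baseUrl, text, voice = "en") {
  const url = new URL(baseUrl);
  url.searchParams.set("text", text);
  url.searchParams.set("voice", voice);
  return url.toString();
}

// Browser-only usage: play the engine's raw WAV output on a user-chosen sink.
async function speakViaEngine(text) {
  const audio = new Audio(buildSynthesisUrl("http://localhost:8000/speak", text));
  const device = await navigator.mediaDevices.selectAudioOutput();
  await audio.setSinkId(device.deviceId);
  await audio.play();
}
```

Because the audio arrives as an ordinary media resource, everything the platform already offers for `<audio>` — `setSinkId()`, Web Audio processing, MediaRecorder capture — applies, which is exactly what `speechSynthesis.speak()` output lacks.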
Aside from more elaborate solutions, you can use any language for the server; here we use speak.php.
Hello! Have there been any discussions around giving developers the ability to direct speech generated via the Web Speech API's SpeechSynthesis interface to a specific audio output? I've not been able to find any, and it seems like a fairly important feature.