Listening continuously and separating user phrases by silence? #144
Unanswered
scpedicini asked this question in Q&A
Replies: 1 comment
FYI: I think I've got a workaround in place: just monitor whenever `interimTranscript` transitions from non-empty to empty.

```javascript
import { useRef, useEffect } from 'react';

const transcriptRef = useRef(interimTranscript);
useEffect(() => {
  // When interimTranscript clears after having content, the user has
  // paused; the previous snapshot holds the completed phrase.
  if (interimTranscript === '' && transcriptRef.current.trim().length > 0) {
    const user_speech = transcriptRef.current.trim();
    resetTranscript();
    console.log(`useEffect - user said: ${user_speech}`);
  }
  transcriptRef.current = interimTranscript;
}, [interimTranscript, resetTranscript]);
```
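The same silence-segmentation idea can be sketched as a pure function, independent of React. This is just an illustration of the workaround's logic; `createPhraseDetector` is a hypothetical name, not part of `react-speech-recognition`:

```javascript
// Feed successive interimTranscript snapshots to the detector; whenever the
// interim text clears after having content, the previous snapshot is emitted
// as one completed phrase. (Illustrative sketch, not library API.)
function createPhraseDetector() {
  let previous = '';
  return function onInterimUpdate(interim) {
    let phrase = null;
    if (interim === '' && previous.trim().length > 0) {
      phrase = previous.trim(); // user paused: previous snapshot is the phrase
    }
    previous = interim;
    return phrase; // null while the user is still speaking
  };
}

// Simulated stream of interimTranscript values:
const detect = createPhraseDetector();
const updates = ['turn', 'turn off', 'turn off the lights please', '',
                 'report', 'report the weather', ''];
const phrases = updates.map(detect).filter(Boolean);
// phrases → ['turn off the lights please', 'report the weather']
```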
I have turned on `continuous` (set it to true) and as far as I can tell it continues to update the `transcript` string with the running transcript, and works great. When I've used `webkitSpeechRecognition` in the past, I've been able to subscribe to `onresult` and then look for a property called `isFinal`. This would allow me to listen continuously but "divide" each user's spoken stream into separate segments. For example, if a user says:

"turn off the lights please"

If they are silent for a while, the results will show that `isFinal` is set to true along with a `confidence` value. I can then act accordingly on that transcript, but all the while we are still continuously listening, and they may then say:

"report the weather"

Again followed by silence, etc.

How can I achieve this with `react-speech-recognition`?
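For reference, the raw Web Speech API pattern the question describes looks roughly like the sketch below. The `extractFinalSegments` helper is hypothetical, factored out of the `onresult` handler so the result handling can be shown against a mock event object:

```javascript
// With continuous recognition and interimResults enabled, each entry in
// event.results carries an isFinal flag and, on its alternatives, a
// confidence value (per the Web Speech API's SpeechRecognitionResult).
function extractFinalSegments(event) {
  const finals = [];
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (result.isFinal) {
      finals.push({
        text: result[0].transcript,
        confidence: result[0].confidence,
      });
    }
  }
  return finals;
}

// In the browser it would be wired up roughly like this:
// const recognition = new webkitSpeechRecognition();
// recognition.continuous = true;
// recognition.interimResults = true;
// recognition.onresult = (event) => {
//   for (const segment of extractFinalSegments(event)) {
//     console.log(segment.text, segment.confidence);
//   }
// };

// Mock event mimicking the shape of a SpeechRecognitionEvent:
const mockEvent = {
  resultIndex: 0,
  results: [
    Object.assign([{ transcript: 'turn off the lights please', confidence: 0.92 }],
                  { isFinal: true }),
    Object.assign([{ transcript: 'report the', confidence: 0.5 }],
                  { isFinal: false }),
  ],
};
// extractFinalSegments(mockEvent)
// → [{ text: 'turn off the lights please', confidence: 0.92 }]
```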