I want my website to take audio input from a user speaking into their mic and display what they said on the screen. I have implemented speech recognition in my React website, but the transcript doesn't appear when I speak into my mic.
Here is my speech recognition code.
This is in my main App.js file, inside a class called App; it is part of what's being rendered:
<p>Microphone: {Dictaphone.listening ? 'on' : 'off'}</p>
<button onClick={SpeechRecognition.startListening}>Start</button>
<button onClick={SpeechRecognition.stopListening}>Stop</button>
<button onClick={Dictaphone.resetTranscript}>Reset</button>
<p>{Dictaphone.transcript}</p>
This is in a separate file called Dictaphone.jsx. I had to make a separate file because I couldn't use hooks inside the App class:
import { useSpeechRecognition } from 'react-speech-recognition';
const Dictaphone = () => {
const {
transcript,
listening,
resetTranscript,
browserSupportsSpeechRecognition
} = useSpeechRecognition();
return null;
};
export default Dictaphone
Does anyone know why the transcript of what is being said into the mic doesn't appear on the screen?
Also, when I click the "Start" button on the website, my mic turns on (a mic icon pops up on my PC), but the "off" never changes to "on" in this piece of code: <p>Microphone: {Dictaphone.listening ? 'on' : 'off'}</p>. Does anyone know why?
CodePudding user response:
There are two problems:
1.) Dictaphone is never actually used as a React component. Simplifying a bit: a function only becomes a component when you render it as an element inside App (or a descendant of App), e.g.:
const App = () => {
return <>
<Dictaphone />
</>
} // <- now React knows Dictaphone is a component and will run its hook
2.) Even if it were rendered as a component, you can't read another component's hook state by accessing properties on the function itself — Dictaphone.listening, Dictaphone.transcript and Dictaphone.resetTranscript are all just undefined.
So you can try this code instead (note the default import for the start/stop controls, and that the Reset button now uses dictaphone.resetTranscript):
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

const YourParent = () => {
  const dictaphone = useSpeechRecognition();
  return <>
    <p>Microphone: {dictaphone.listening ? 'on' : 'off'}</p>
    <button onClick={SpeechRecognition.startListening}>Start</button>
    <button onClick={SpeechRecognition.stopListening}>Stop</button>
    <button onClick={dictaphone.resetTranscript}>Reset</button>
    <p>{dictaphone.transcript}</p>
  </>;
};
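One extra caveat worth handling: speech recognition only works in browsers that implement the Web Speech API (mostly Chromium-based ones). The hook already returns a browserSupportsSpeechRecognition flag — your Dictaphone destructures it but never uses it — so a sketch of an early-return guard (component name YourParent reused from above) might look like:

```
import { useSpeechRecognition } from 'react-speech-recognition';

const YourParent = () => {
  const { browserSupportsSpeechRecognition } = useSpeechRecognition();

  // Bail out early when the Web Speech API is unavailable,
  // instead of rendering controls that silently do nothing.
  if (!browserSupportsSpeechRecognition) {
    return <span>Your browser doesn't support speech recognition.</span>;
  }

  // ...render the mic controls and transcript as above...
};
```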
EDIT: OK, if you need to keep two separate files, I think the cleanest way is to use a context.
In the Dictaphone file:
import React from 'react';
import { useSpeechRecognition } from 'react-speech-recognition';

const DictaphoneContext = React.createContext();

export const Dictaphone = ({ children }) => {
  const dictaphone = useSpeechRecognition();
  return (
    <DictaphoneContext.Provider value={dictaphone}>
      {children}
    </DictaphoneContext.Provider>
  );
};

export const DictaphoneListening = () => {
  const dictaphone = React.useContext(DictaphoneContext);
  return <>{dictaphone.listening ? 'on' : 'off'}</>;
};

export const DictaphoneTranscript = () => {
  const dictaphone = React.useContext(DictaphoneContext);
  return <>{dictaphone.transcript}</>;
};
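If you also want the Reset button to work across files, resetTranscript has to come through the same context, since the hook state lives inside the provider. A small consumer component (the name DictaphoneResetButton is my own, following the pattern above) could be added to the Dictaphone file:

```
// Assumes DictaphoneContext is the context created earlier in this file.
export const DictaphoneResetButton = () => {
  const dictaphone = React.useContext(DictaphoneContext);
  return <button onClick={dictaphone.resetTranscript}>Reset</button>;
};
```

Then render <DictaphoneResetButton /> inside <Dictaphone> in the main file in place of the Reset button.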
In the main file (importing Dictaphone, DictaphoneListening and DictaphoneTranscript from the Dictaphone file, plus the default SpeechRecognition export):
return <>
  <Dictaphone>
    <p>Microphone: <DictaphoneListening /></p>
    <button onClick={SpeechRecognition.startListening}>Start</button>
    <button onClick={SpeechRecognition.stopListening}>Stop</button>
    {/* note: Dictaphone.resetTranscript does not exist — resetTranscript
        must be read from the context by a consumer component,
        just like listening and transcript */}
    <p><DictaphoneTranscript /></p>
  </Dictaphone>
</>;