I wanted to try speech recognition in Next.js 13. I installed react-speech-recognition and copy/pasted the provided example, but I am getting Error: Hydration failed because the initial UI does not match what was rendered on the server.
I tried rolling back React to v18.1 and deleting the .next folder, but it didn't help. I went through the Next.js documentation on the React Hydration Error, but I don't call window and I don't put a div tag inside a p.
Any ideas what the issue could be?
Code:
'use client'

import 'regenerator-runtime/runtime'
import React from 'react'
import SpeechRecognition, {
  useSpeechRecognition,
} from 'react-speech-recognition'

export default function page() {
  const {
    transcript,
    listening,
    resetTranscript,
    browserSupportsSpeechRecognition,
  } = useSpeechRecognition()

  if (!browserSupportsSpeechRecognition) {
    return <span>Browser doesn't support speech recognition.</span>
  }

  return (
    <div>
      <p>Microphone: {listening ? 'on' : 'off'}</p>
      <button onClick={SpeechRecognition.startListening}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      <button onClick={resetTranscript}>Reset</button>
      <p>{transcript}</p>
    </div>
  )
}
Answer:
The hydration error is caused by these lines:
if (!browserSupportsSpeechRecognition) {
  return <span>Browser doesn't support speech recognition.</span>
}
Because you are using the 'use client' directive, this component behaves like a traditional page component from previous Next.js versions: the page is pre-rendered on the server and then sent to the client to be hydrated. The library checks whether webkitSpeechRecognition or SpeechRecognition exists on the window object in order to set the browserSupportsSpeechRecognition boolean, but window is not available server-side (it is undefined). The condition above therefore evaluates to true during server rendering, creating the mismatch between what the server rendered and what the client produces on its first render (if you view the page source, you will notice that the "not supported" text was rendered on the server).
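For intuition, this is roughly the kind of feature check involved; a simplified sketch, not the library's actual source:

// Simplified sketch of the browser-support check (an assumption, not the library's exact code)
const supported =
  typeof window !== 'undefined' &&
  !!(window.SpeechRecognition || window.webkitSpeechRecognition)
// During server rendering, window is undefined, so this is always false there,
// while on a capable browser it becomes true on the client.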
You can solve the issue with the useState and useEffect hooks, taking advantage of the fact that useEffect only runs on the client side:
'use client'

import React, { useState, useEffect } from 'react'
import 'regenerator-runtime/runtime'
import SpeechRecognition, {
  useSpeechRecognition,
} from 'react-speech-recognition'

const Page = () => {
  const [speechRecognitionSupported, setSpeechRecognitionSupported] =
    useState(null) // null or boolean

  const {
    transcript,
    listening,
    resetTranscript,
    browserSupportsSpeechRecognition,
  } = useSpeechRecognition()

  useEffect(() => {
    // sets to true or false after the component has been mounted (client-side only)
    setSpeechRecognitionSupported(browserSupportsSpeechRecognition)
  }, [browserSupportsSpeechRecognition])

  if (speechRecognitionSupported === null) return null // return null on first render, can be a loading indicator

  if (!speechRecognitionSupported) {
    return <span>Browser does not support speech recognition.</span>
  }

  return (
    <div>
      <p>Microphone: {listening ? 'on' : 'off'}</p>
      <button onClick={SpeechRecognition.startListening}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      <button onClick={resetTranscript}>Reset</button>
      <p>{transcript}</p>
    </div>
  )
}

export default Page
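As an aside, another common way to avoid this kind of mismatch is to opt the component out of server rendering entirely with next/dynamic and ssr: false. A minimal sketch, assuming the speech UI above is moved into its own client component file (the ./SpeechPanel path is hypothetical):

'use client'

import dynamic from 'next/dynamic'

// Hypothetical file containing the useSpeechRecognition UI shown above
const SpeechPanel = dynamic(() => import('./SpeechPanel'), {
  ssr: false, // rendered only on the client, so there is nothing to mismatch
})

export default function Page() {
  return <SpeechPanel />
}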