I'm generating speech through Google Cloud's text-to-speech API and I'd like to highlight words as they are spoken.
Is there a way of getting timestamps for spoken words or sentences?
You can do this using SSML and the v1beta1 version of Google Cloud's text-to-speech API: https://cloud.google.com/text-to-speech/docs/reference/rest/v1beta1/text/synthesize#TimepointType
Add <mark> SSML tags at the points in the text that you want a timestamp for (maybe at the end of each sentence), and request timepoints of type SSML_MARK. If this field is not set, timepoints are not returned by default. Google's text-to-speech API supports this in the v1beta1 release, at the time of writing.
In Python (as an example) you will need to change the import from:
from google.cloud import texttospeech as tts
to:
from google.cloud import texttospeech_v1beta1 as tts
You must use SSML, not plain text, and include <mark> tags in the XML.
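For example, the SSML might look like this (the mark names are arbitrary labels I chose; each one comes back in the response with its timestamp):

<speak>
  <mark name="s1"/>This is the first sentence.
  <mark name="s2"/>And here is the second one.
</speak>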
The synthesis request needs the enable_time_pointing
flag to be set. In Python this looks like:
response = client.synthesize_speech(
    request=tts.SynthesizeSpeechRequest(
        ...
        enable_time_pointing=[
            tts.SynthesizeSpeechRequest.TimepointType.SSML_MARK
        ]
    )
)
For a runnable example, see my answer on this question.
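If it helps, here is a minimal self-contained sketch of how the pieces fit together (the voice, audio encoding, and output filename are illustrative choices, not requirements of the API):

from google.cloud import texttospeech_v1beta1 as tts

# SSML with <mark> tags, as in the snippet above; the mark names are arbitrary.
ssml = (
    "<speak>"
    "<mark name='s1'/>This is the first sentence. "
    "<mark name='s2'/>And here is the second one."
    "</speak>"
)

client = tts.TextToSpeechClient()
response = client.synthesize_speech(
    request=tts.SynthesizeSpeechRequest(
        input=tts.SynthesisInput(ssml=ssml),
        voice=tts.VoiceSelectionParams(language_code="en-US"),
        audio_config=tts.AudioConfig(audio_encoding=tts.AudioEncoding.MP3),
        enable_time_pointing=[
            tts.SynthesizeSpeechRequest.TimepointType.SSML_MARK
        ],
    )
)

# Each timepoint pairs a mark name with its offset (in seconds) into the audio.
for timepoint in response.timepoints:
    print(timepoint.mark_name, timepoint.time_seconds)

with open("output.mp3", "wb") as f:
    f.write(response.audio_content)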
This question seems to have gotten quite popular, so I thought I'd share what I ended up doing. This method will probably only work with English or similar languages.
I first split the text on any punctuation that causes a break in speaking. Each "sentence" is converted to speech separately. The resulting audio files have a seemingly random amount of silence at the end, which needs to be removed before joining them; this can be done with the FFmpeg silencedetect filter. You can then join the audio files with an appropriate gap. Approximate word timestamps can be linearly interpolated within the sentences.
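Roughly, the interpolation step can look like the sketch below. The helper is my own, and it assumes each word's share of a sentence's duration is proportional to its character count, which is only an approximation:

def interpolate_word_times(sentence, start_time, duration):
    """Spread a sentence's words across its audio duration.

    start_time is the sentence's offset (in seconds) in the joined audio and
    duration is the length of its clip. Each word's duration is assumed to be
    proportional to its character count -- a rough approximation, not an
    exact alignment.
    """
    words = sentence.split()
    total_chars = sum(len(w) for w in words) or 1
    times = []
    elapsed = start_time
    for word in words:
        times.append((word, elapsed))
        elapsed += duration * len(word) / total_chars
    return times

# Example: a 2.4-second clip starting 5.0 seconds into the joined audio.
print(interpolate_word_times("Highlight each word as it is spoken", 5.0, 2.4))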