
I am dealing with a speech recognition task. So far, I have been using the Google Cloud Speech Recognition API (in Python) with good results. The API returns a confidence value along with every chunk of transcribed text. The confidence is a number between 0 and 1, as stated in the docs, but I did not find any deeper explanation of how Google's API derives this number, so I assume it somehow comes from the neural network that does the recognition.

The next step I want to take is to build my own (offline) automatic speech recognition program, and I found that pyKaldi should be up to the task. I have not started programming it yet, but I want to know beforehand (for research purposes): can Kaldi return a confidence value similar to the one the Google Speech-to-Text API provides? And what really is this "confidence", and how is it computed?

Petr Krýže
  • Yes, out of the box Kaldi provides a "confidence" value per word, but in practice it is useless. Building an ASR system from scratch is a very complex task. Even running the Kaldi-provided examples takes considerable engineering effort and demands a good understanding of ASR principles. And using publicly available datasets (or models), you will not be able to get anywhere close to the level of accuracy _Google Speech_ provides. So if you really want to dive into it, prepare to invest months of time. – igrinis Oct 27 '19 at 08:36
  • That is true. Building your own ASR is a difficult task. Instead, explore other open-source ASRs (DeepSpeech, CMU Sphinx, etc.) and adapt their already-built models with your dataset. – Sumit Jangra Jul 10 '20 at 07:30

1 Answer


Yes, pyKaldi supports confidence values (word confidence scores), calculated with minimum Bayes risk (MBR) decoding. You will find all the necessary information in the documentation. Here is the link to the description of the module:

https://pykaldi.github.io/api/kaldi.lat.html?highlight=mbr#module-kaldi.lat.sausages

As the name says, it is a confidence value, but it does not express how "probable" it is that the text output for a word, derived (or given, in a probabilistic setting) from a sequence of audio chunks, is correct. In my opinion, its expressivity or meaningfulness is a bit fuzzy and depends on the quality of the model and the training data (noise, reverb, etc.). It is meaningful for comparing alternatives, telling you that the one with the higher value is more likely to be the correct one. This in turn poses the problem of deciding which distance counts as a significant difference. A single confidence value in isolation does not tell you anything, nor can you compare two different recognizer models only on the basis of their confidence values. Microsoft's documentation puts it this way: "Instead, confidence scores provide a mechanism for comparing the relative accuracy of multiple recognition alternates for a given input. This facilitates returning the most accurate recognition result."
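To make the "relative, not absolute" point concrete, here is a minimal self-contained sketch. The per-word numbers are made up for illustration (they are not pyKaldi output), and `hypothesis_confidence` is a hypothetical helper; the idea is that confidences are used to rank alternates of the *same* utterance, not read as probabilities of correctness:

```python
def hypothesis_confidence(word_confidences):
    """Aggregate per-word confidences into one score for a hypothesis.

    Taking the minimum is a common conservative choice: a single
    low-confidence word makes the whole hypothesis suspect.
    """
    return min(word_confidences)

# Two recognition alternates for the same audio, with made-up
# per-word confidence scores in [0, 1]:
alternates = {
    "recognize speech": [0.92, 0.88],
    "wreck a nice beach": [0.75, 0.60, 0.81, 0.58],
}

# Pick the alternate whose weakest word is the strongest.
best = max(alternates, key=lambda hyp: hypothesis_confidence(alternates[hyp]))
print(best)
```

Note that the absolute values (0.88 vs. 0.58 here) are only comparable because both alternates come from the same recognizer on the same input; the same numbers from a different model would mean something different.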

CLpragmatics