
I am working on a college project that uses speech recognition. I am currently developing it on Windows 7 in C#, using the System.Speech API that ships with .NET.

The problem I am facing is that dictation recognition is not accurate enough. Also, whenever I start my application, the desktop speech recognition starts automatically. This is a big nuisance to me: the words I speak are already not clear enough, and conflicting recognitions are interpreted as commands, so actions like application switching and minimizing are carried out.

This is a critical part of my app, and I kindly request you to suggest any good speech API other than this Microsoft blunder. It would be good even if it could understand just a simple dictation grammar.

ROMANIA_engineer
swordfish
    There is no magic potion. Buy a better microphone and train the speech recognition engine. It also helps a lot to train yourself on how you speak to it; clear pronunciation is key. Also try to keep it in either command mode or dictation mode: it needs either a small set of commands, or to use sentence build-up to guess what words it heard. Mixing these will make it perform badly. (A sketch of this separation follows these comments.) – Tedd Hansen Mar 29 '11 at 13:40
  • Yes, keeping them separate did increase the accuracy of recognition a bit. Thanks... – swordfish Mar 30 '11 at 06:37
  • Do you use a word list, or do you want it to recognize any word? – Saeed A Suleiman Jun 22 '15 at 22:38
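
To illustrate the separation Tedd Hansen describes, here is a minimal sketch (assuming the System.Speech API mentioned in the question): a small fixed command set loads as its own grammar, while open dictation would be a separate grammar used on its own, not alongside the commands.

using System.Speech.Recognition;

// Command mode: a handful of fixed phrases is far easier to recognize
// than open dictation.
var commands = new Choices("open", "close", "minimize", "maximize");
var commandGrammar = new Grammar(new GrammarBuilder(commands));

using (var recognizer = new SpeechRecognitionEngine())
{
    recognizer.LoadGrammar(commandGrammar);            // either commands...
    // recognizer.LoadGrammar(new DictationGrammar()); // ...or dictation, not both
    recognizer.SetInputToDefaultAudioDevice();
    RecognitionResult result = recognizer.Recognize();
}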

2 Answers


I think desktop recognition is starting because you are using a shared desktop recognizer. You should use an in-process (in-proc) recognizer for your application only. You do this by instantiating a SpeechRecognitionEngine() in your application.
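
A minimal sketch of the difference (both types live in System.Speech.Recognition; the shared recognizer is what drives the desktop speech UI):

using System.Speech.Recognition;

// Shared desktop recognizer: hooks into the Windows speech UI, so desktop
// commands ("minimize", application switching) stay active. Avoid this here.
// SpeechRecognizer shared = new SpeechRecognizer();

// In-process recognizer: private to this application only.
SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();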

Since you are using the dictation grammar and the desktop Windows recognizer, I believe it can be trained by the speaker to improve its accuracy. Go through the Windows 7 recognizer training and see if the accuracy improves.

To get started with .NET speech, there is a very good article that was published a few years ago at http://msdn.microsoft.com/en-us/magazine/cc163663.aspx. It is probably the best introductory article I've found so far. It is a little out of date, but very helpful. (The AppendResultKeyValue method was dropped after the beta.)

Here is a quick sample showing one of the simplest .NET Windows Forms apps using a dictation grammar that I could think of. It should work on Windows Vista or Windows 7. I created a form, dropped a button on it, and made the button big. I added a reference to System.Speech and the line:

using System.Speech.Recognition;

Then I added the following event handler to button1:

private void button1_Click(object sender, EventArgs e)
{
    // In-process recognizer, private to this application.
    SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
    Grammar dictationGrammar = new DictationGrammar();
    recognizer.LoadGrammar(dictationGrammar);
    try
    {
        button1.Text = "Speak Now";
        recognizer.SetInputToDefaultAudioDevice();
        // Blocks until a phrase is recognized or recognition times out.
        RecognitionResult result = recognizer.Recognize();
        button1.Text = result.Text;
    }
    catch (InvalidOperationException exception)
    {
        button1.Text = String.Format("Could not recognize input from default audio device. Is a microphone or sound card available?\r\n{0} - {1}.", exception.Source, exception.Message);
    }
    finally
    {
        recognizer.UnloadAllGrammars();
    }
}

A little more information comparing the various flavors of speech engines and APIs shipped by Microsoft can be found at What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?

Michael Levy
  • Thank you... I still have a long way to go, but the time for my demonstration is getting closer... I think I'd have to train my system harder and faster. Thanks for your help. – swordfish Mar 30 '11 at 06:44
  • 4
    As `SpeechRecognitionEngine` is disposable, it should be in a `using` block. No doubt it uses native resources internally. – Drew Noakes Mar 07 '15 at 17:48
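
Following up on Drew Noakes' comment, a minimal sketch of the same handler with the engine in a using block (same form setup as above; note that Recognize() may return null if nothing is heard):

private void button1_Click(object sender, EventArgs e)
{
    // Dispose the engine and its native resources deterministically.
    using (var recognizer = new SpeechRecognitionEngine())
    {
        recognizer.LoadGrammar(new DictationGrammar());
        recognizer.SetInputToDefaultAudioDevice();
        button1.Text = "Speak Now";
        RecognitionResult result = recognizer.Recognize();
        button1.Text = (result != null) ? result.Text : "(nothing recognized)";
    }
}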

If anyone needs a speech recognition engine with 90% of the accuracy of Cortana, they should follow these steps.

Step 1) Download the NuGet package Microsoft.Windows.SDK.Contracts

Step 2) Migrate your project from packages.config to PackageReference --> https://devblogs.microsoft.com/nuget/migrate-packages-config-to-package-reference/

The above-mentioned SDK provides the Windows 10 speech recognition system within Win32 apps. This matters because otherwise the only way to use this speech recognition engine is to build a Universal Windows Platform (UWP) application, and I don't recommend making an A.I. application on the Universal Windows Platform: its sandboxing isolates the app in a container, won't allow it to communicate with any hardware, makes file access an absolute pain, and rules out thread management, leaving only async functions.
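
After the migration, the project file should carry a PackageReference for the SDK from Step 1. A sketch of the relevant csproj fragment (the version number here is only illustrative; use the latest from NuGet):

<ItemGroup>
  <PackageReference Include="Microsoft.Windows.SDK.Contracts" Version="10.0.19041.1" />
</ItemGroup>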

Step 3) Add this namespace to your using directives. It contains all the types related to online speech recognition.

using Windows.Media.SpeechRecognition;

Step 4) Add the speech recognition implementation:

Task.Run(async () =>
{
    try
    {
        // Create the Windows 10 recognizer and compile its default constraints.
        var speech = new SpeechRecognizer();
        await speech.CompileConstraintsAsync();

        // Listen for a single utterance.
        SpeechRecognitionResult result = await speech.RecognizeAsync();

        // Marshal back to the UI thread before touching the control.
        TextBox1.Invoke((Action)(() => TextBox1.Text = result.Text));
    }
    catch (Exception)
    {
        // Handle or log recognition failures here rather than swallowing them silently.
    }
});

Most of the methods of the Windows 10 SpeechRecognizer class must be called asynchronously, which means you must run them within a Task.Run(async () => { ... }) lambda, an async method, or an async Task method.
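
Alternatively, a minimal sketch that awaits the recognizer directly from an async event handler (a hypothetical button2 handler; after the awaits, the continuation resumes on the UI thread, so the control can be set directly):

private async void button2_Click(object sender, EventArgs e)
{
    var speech = new SpeechRecognizer();
    await speech.CompileConstraintsAsync();
    SpeechRecognitionResult result = await speech.RecognizeAsync();
    TextBox1.Text = result.Text; // back on the UI thread after the awaits
}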

In order for this to work, go to Settings -> Privacy -> Speech in the OS and check that online speech recognition is allowed.

teodor mihail