Is there any way to utilize Apple's dictation voice-to-text abilities in a native Apple application?
1 Answer
Your question is a little vague; it would help to know what you have already tried, or even what you are trying to achieve.
Keyword-recognition APIs are more common, but OpenEars is a speech recognition API you could use for this, and CeedVocal is another.
OpenEars is free, but CeedVocal apparently gives better results.
EDIT
If you want a speech recognition API for OS X, just use the NSSpeechRecognizer class.
The NSSpeechRecognizer class is the Cocoa interface to Speech Recognition on OS X. Speech Recognition is architected as a “command and control” voice recognition system. It uses a finite state grammar and listens for phrases in that grammar. When it recognizes a phrase, it notifies the client process. This architecture is different from that used to support dictation.
It is fully configurable to your needs.
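For what it's worth, here is a minimal Swift sketch of that command-and-control pattern; the class name and the phrase list are made up for illustration, not taken from Apple's docs or the original question:

```swift
import Cocoa

// Minimal sketch of NSSpeechRecognizer's command-and-control pattern.
// The class name and phrase list are invented for illustration.
final class CommandListener: NSObject, NSSpeechRecognizerDelegate {
    private let recognizer = NSSpeechRecognizer()   // failable; nil if recognition is unavailable

    func start() {
        recognizer?.commands = ["Open file", "Save file", "Quit"]  // the finite grammar of phrases
        recognizer?.delegate = self
        recognizer?.listensInForegroundOnly = true
        recognizer?.startListening()
    }

    func stop() {
        recognizer?.stopListening()
    }

    // Called each time one of the phrases in `commands` is heard.
    func speechRecognizer(_ sender: NSSpeechRecognizer, didRecognizeCommand command: String) {
        print("Recognized: \(command)")
    }
}
```

Create one CommandListener, keep a strong reference to it (for example as a property on your app delegate), and call start(); the delegate callback then fires whenever one of the listed phrases is recognized.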

- I removed the second question. What I meant by "use results" is feed them into Automator or output them to the console. – ControlAltDelete Sep 04 '13 at 00:03
- What console? What type of app are you making? iOS or OS X? – CaptJak Sep 04 '13 at 00:18
- So I have a couple of options. I have a binary written for Linux in C that I will run on a Mac with a few modifications (no surprise there). I do need voice-to-text to work on the Mac and possibly return something to the command line with results to feed into this binary. For example, call the voice-to-text module, it listens for input, and writes the resulting text to standard output, which I can parse. That is why I was wondering if you can launch the dictation software from the console and have the results output to the system console (see the sketch after these comments). – ControlAltDelete Sep 04 '13 at 10:24
- @Jib, I thought you were talking about iOS. I edited my answer for OS X speech recognition. – CaptJak Sep 04 '13 at 13:48
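Regarding the console workflow described in the comments above: one possibility (not something the original answer shows, and only a sketch) is to wrap NSSpeechRecognizer in a small command-line tool that keeps a run loop alive and prints each recognized phrase to standard output for the C binary to parse. Note that this still only matches a fixed phrase list; it is not free-form dictation.

```swift
import Cocoa

// Hypothetical command-line wrapper: each recognized phrase is written to
// standard output so another process (e.g. the C binary above) can parse it.
// NSSpeechRecognizer only matches the phrases listed in `commands`.
final class StdoutListener: NSObject, NSSpeechRecognizerDelegate {
    private let recognizer = NSSpeechRecognizer()

    func start() {
        recognizer?.commands = ["run", "stop", "status"]   // placeholder phrases
        recognizer?.delegate = self
        recognizer?.listensInForegroundOnly = false        // listen even without a focused window
        recognizer?.startListening()
    }

    func speechRecognizer(_ sender: NSSpeechRecognizer, didRecognizeCommand command: String) {
        print(command)      // goes to stdout
        fflush(stdout)      // flush so a piped reader sees it immediately
    }
}

let listener = StdoutListener()
listener.start()
RunLoop.main.run()          // keep the process alive for recognition callbacks
```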