
I'm trying to record audio and do speech recognition at the same time. Each of them works separately, but together only the recording works.

The code looks like this:

private SpeechRecognizer sr;
private MediaRecorder recorder;

private void startRecording() throws IOException {
    recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    recorder.setOutputFile("/dev/null"); // discard the output; only mic capture matters here
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);

    recorder.prepare();
    recorder.start();
}

private void startRecognition() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault().toString());
    intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
    intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getPackageName());

    sr = SpeechRecognizer.createSpeechRecognizer(this);
    sr.setRecognitionListener(this);
    sr.startListening(intent);
}

When both methods are called, the onReadyForSpeech callback fires but no speech is ever recognized. When only startRecognition() is called, speech recognition works fine.

I'm guessing it's because the speech recognizer is also trying to use the microphone input, but I wonder how this issue can be worked around.

EDIT: I'm not looking to use a cloud API or any other non-offline API (as suggested in another similar question). Also, the FLAC approach may lose the ability to get partial transcription results. I'm still looking at using, but would prefer a more standard non-JNI alternative if possible.

Zohar Etzioni
  • Similar FAQ: https://stackoverflow.com/questions/7160741/android-speech-recognizing-and-audio-recording-in-the-same-time and https://stackoverflow.com/questions/23047433/record-save-audio-from-voice-recognition-intent/ – Robert Rowntree Oct 29 '17 at 15:53
  • In short, IMO, you still need to share the audio buffer with something like an array copy that presents separate streams to your two consumers (recognizer and audio recorder); there are sample Git libraries that do this. – Robert Rowntree Oct 29 '17 at 15:56
  • Possible duplicate of [Android speech recognizing and audio recording in the same time](https://stackoverflow.com/questions/7160741/android-speech-recognizing-and-audio-recording-in-the-same-time) – Nikolay Shmyrev Oct 30 '17 at 07:19
  • Did you find a solution? – Roman Soviak Jan 09 '19 at 12:14
  • I know this is an old post, but did you find a solution? I'm working on this right now and can't find anything. – Jacob Metcalf Aug 19 '22 at 18:56
  • Is there any solution for this particular problem? – AlkanV Jul 11 '23 at 12:52
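The buffer-sharing workaround suggested in the comments can be sketched as a fan-out loop: one capture thread reads PCM chunks and hands an independent copy to each consumer, so a file writer and a recognizer never compete for the microphone. Note that the stock SpeechRecognizer has no public API for accepting raw audio, so the recognizer consumer would have to be an offline engine (e.g. pocketsphinx), as the comments imply. The sketch below is plain Java so it runs anywhere: `PcmSource` is a hypothetical stand-in for Android's `AudioRecord.read(...)`, and `pump` is a hypothetical helper name, not part of any API.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AudioFanOut {
    // Hypothetical stand-in for AudioRecord.read(short[], int, int) on Android.
    interface PcmSource { short[] readChunk(); }

    // Read `chunks` buffers from the source and give every consumer its own copy,
    // so the two consumers (recorder, recognizer) never share a mutable buffer.
    static void pump(PcmSource source, int chunks,
                     List<BlockingQueue<short[]>> consumers) throws InterruptedException {
        for (int i = 0; i < chunks; i++) {
            short[] chunk = source.readChunk();
            for (BlockingQueue<short[]> q : consumers) {
                q.put(Arrays.copyOf(chunk, chunk.length)); // independent copy per consumer
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Fake microphone producing ramp samples (replaces AudioRecord for the demo).
        PcmSource fakeMic = new PcmSource() {
            int n = 0;
            public short[] readChunk() {
                short[] buf = new short[4];
                for (int i = 0; i < buf.length; i++) buf[i] = (short) (n * 10 + i);
                n++;
                return buf;
            }
        };
        BlockingQueue<short[]> toRecorder = new ArrayBlockingQueue<>(8);
        BlockingQueue<short[]> toRecognizer = new ArrayBlockingQueue<>(8);
        pump(fakeMic, 3, List.of(toRecorder, toRecognizer));

        // Both consumers see identical data in distinct arrays.
        short[] a = toRecorder.take();
        short[] b = toRecognizer.take();
        System.out.println(Arrays.equals(a, b) && a != b); // prints true
    }
}
```

On Android, `fakeMic` would be replaced by a loop over `AudioRecord.read(...)` at 16 kHz mono PCM; one queue's consumer writes the samples to a file, the other feeds the offline recognizer.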

0 Answers