
Problem: the WAV file loads and is processed by the AudioDispatcher, but no sound plays.

First, the permissions:

public void checkPermissions() {
    if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this.requireContext(), Manifest.permission.RECORD_AUDIO)) {
        // When permission has not been granted, show the user why it is needed.
        if (ActivityCompat.shouldShowRequestPermissionRationale(this.requireActivity(), Manifest.permission.RECORD_AUDIO)) {
            Toast.makeText(this.getContext(), "Please grant permissions to record audio", Toast.LENGTH_LONG).show();
            // Give the user the option to grant the permission anyway.
        }

        ActivityCompat.requestPermissions(this.requireActivity(), new String[]{Manifest.permission.RECORD_AUDIO}, MY_PERMISSIONS_RECORD_AUDIO);
        launchProfile();
    }
    //If permission is granted, then proceed
    else if (ContextCompat.checkSelfPermission(this.requireContext(), Manifest.permission.RECORD_AUDIO) == PackageManager.PERMISSION_GRANTED) {
        launchProfile();
    }
}
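
For reference, requestPermissions() returns immediately, so the grant/deny result comes back later through the Activity's onRequestPermissionsResult(); a minimal sketch of that callback (not code from my project), assuming the same MY_PERMISSIONS_RECORD_AUDIO request code:

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == MY_PERMISSIONS_RECORD_AUDIO
            && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        // Permission granted after the prompt; safe to continue with audio work here.
    }
}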

Then the launchProfile() function:

public void launchProfile() {
    AudioMethods.test(getActivity().getApplicationContext());
    // Other fragments load after this that actually do things with the audio file,
    // but I want to get this working before anything else runs.
}

Then the AudioMethods.test function:

public static void test(Context context){
    String fileName = "audio-samples/samplefile.wav";
    try{
        releaseStaticDispatcher(dispatcher);

        TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
                22050,
                16, //based on the screenshot from Audacity, should this be 32?
                1,
                2,
                22050,
                ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));

        AssetManager assetManager = context.getAssets();
        AssetFileDescriptor fileDescriptor = assetManager.openFd(fileName);

        InputStream stream = fileDescriptor.createInputStream();
        dispatcher = new AudioDispatcher(new UniversalAudioInputStream(stream, tarsosDSPAudioFormat),1024,512);
        
        //Not playing sound for some reason...
        final AudioProcessor playerProcessor = new AndroidAudioPlayer(tarsosDSPAudioFormat, 22050, AudioManager.STREAM_MUSIC);
        dispatcher.addAudioProcessor(playerProcessor);


        dispatcher.run();

        Thread audioThread = new Thread(dispatcher, "Test Audio Thread");
        audioThread.start();

    } catch (Exception e) {
        e.printStackTrace();
    }
}

Console output. No errors, just the warning:

W/AudioTrack: Use of stream types is deprecated for operations other than volume control
See the documentation of AudioTrack() for what to use instead with android.media.AudioAttributes to qualify your playback use case
D/AudioTrack: stop(38): called with 12288 frames delivered

Because the AudioTrack is delivering frames and there aren't any runtime errors, I'm assuming I'm just missing something dumb: either I don't have sufficient permissions, or I've missed something in setting up my AndroidAudioPlayer. I got the 22050 number by opening the file in Audacity and looking at the stats there:

[Screenshot of the sample file's stats in Audacity, showing the 22050 Hz sample rate]

Any help is appreciated! Thanks :)


1 Answer


Okay, I figured this out.

I'll address my questions as they appeared originally:

TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
            22050,
            16, //based on the screenshot from Audacity, should this be 32?
            1,
            2,
            22050,
            ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));

ANS: No. Per the TarsosDSP AndroidAudioPlayer constructor Javadoc (copied below), I'm limited to 16:

 /**
 * Constructs a new AndroidAudioPlayer from an audio format, default buffer size and stream type.
 *
 * @param audioFormat The audio format of the stream that this AndroidAudioPlayer will process.
 *                    This can only be 1 channel, PCM 16 bit.
 * @param bufferSizeInSamples  The requested buffer size in samples.
 * @param streamType  The type of audio stream that the internal AudioTrack should use. For
 *                    example, {@link AudioManager#STREAM_MUSIC}.
 * @throws IllegalArgumentException if audioFormat is not valid or if the requested buffer size is invalid.
 * @see AudioTrack
 */
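
For reference, the remaining constructor arguments follow directly from that 16-bit mono constraint; a quick sketch of how they relate (the local variable names here are just for illustration):

int sampleRate = 22050;                            // from the Audacity stats
int sampleSizeInBits = 16;                         // AndroidAudioPlayer only supports PCM 16 bit
int channels = 1;                                  // and only mono
int frameSize = channels * (sampleSizeInBits / 8); // = 2 bytes per frame (the "2" argument)
int frameRate = sampleRate;                        // one frame per sample for PCM, so 22050 again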

The following modifications needed to be made to the test() method (this worked for me):

 public static void test(Context context){
    String fileName = "audio-samples/samplefile.wav";
    try{
        releaseStaticDispatcher(dispatcher);

        TarsosDSPAudioFormat tarsosDSPAudioFormat = new TarsosDSPAudioFormat(TarsosDSPAudioFormat.Encoding.PCM_SIGNED,
                22050,
                16,
                1,
                2,
                22050,
                ByteOrder.BIG_ENDIAN.equals(ByteOrder.nativeOrder()));

        AssetManager assetManager = context.getAssets();
        AssetFileDescriptor fileDescriptor = assetManager.openFd(fileName);
        FileInputStream stream = fileDescriptor.createInputStream();

        // 2048 is the buffer size in samples; 1024 is the buffer overlap,
        // which should just be half of the buffer size in samples.
        dispatcher = new AudioDispatcher(new UniversalAudioInputStream(stream, tarsosDSPAudioFormat), 2048, 1024);
        AudioProcessor playerProcessor = new customAudioPlayer(tarsosDSPAudioFormat, 2048); // again, 2048 is the buffer size in samples
        
        dispatcher.addAudioProcessor(playerProcessor);
        dispatcher.run();
        Thread audioThread = new Thread(dispatcher, "Test Audio Thread");
        audioThread.start();

    } catch (Exception e) {
        e.printStackTrace();
    }
}
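
Regarding the buffer size: 2048 samples at 16 bits is 4096 bytes, which needs to be at least AudioTrack.getMinBufferSize() for this format, otherwise the player constructor below throws. A quick way to check on a given device (just an illustrative snippet; the log tag is arbitrary):

int requestedBytes = 2048 * 16 / 8;   // 4096 bytes for 2048 16-bit mono samples
int minBytes = AudioTrack.getMinBufferSize(22050,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
Log.d("AudioTest", "requested=" + requestedBytes + " bytes, min=" + minBytes + " bytes");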

You'll notice I now create a customAudioPlayer, which is essentially copy-pasted straight from TarsosDSP's AndroidAudioPlayer with two small adjustments:

  1. I hardcoded the usage and content type in the AudioAttributes.Builder() call, so the stream type is no longer passed in.

  2. I'm using the AudioTrack.Builder() method because using stream types for playback was deprecated. Admittedly, I'm not sure if this was the change that fixed it, or if it was the change to the buffer size (or both?).

     /*
      * Constructs a new AndroidAudioPlayer from an audio format, default buffer size and stream type.
      *
      * @param audioFormat The audio format of the stream that this AndroidAudioPlayer will process.
      *                    This can only be 1 channel, PCM 16 bit.
      * @param bufferSizeInSamples  The requested buffer size in samples.
      * @throws IllegalArgumentException if audioFormat is not valid or if the requested buffer size is invalid.
      * @see AudioTrack
      */
      public customAudioPlayer(TarsosDSPAudioFormat audioFormat, int bufferSizeInSamples) {
         if (audioFormat.getChannels() != 1) {
             throw new IllegalArgumentException("TarsosDSP only supports mono audio channel count: " + audioFormat.getChannels());
         }
    
         // The requested sample rate
         int sampleRate = (int) audioFormat.getSampleRate();
    
         //The buffer size in bytes is twice the buffer size expressed in samples if 16bit samples are used:
         int bufferSizeInBytes = bufferSizeInSamples * audioFormat.getSampleSizeInBits()/8;
    
         // From the Android API about getMinBufferSize():
         // The total size (in bytes) of the internal buffer where audio data is read from for playback.
         // If track's creation mode is MODE_STREAM, you can write data into this buffer in chunks less than or equal to this size,
         // and it is typical to use chunks of 1/2 of the total size to permit double-buffering. If the track's creation mode is MODE_STATIC,
         // this is the maximum length sample, or audio clip, that can be played by this instance. See getMinBufferSize(int, int, int) to determine
         // the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller
         // than getMinBufferSize() will result in an initialization failure.
         int minBufferSizeInBytes = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_MONO,  AudioFormat.ENCODING_PCM_16BIT);
         if(minBufferSizeInBytes > bufferSizeInBytes){
             throw new IllegalArgumentException("The buffer size should be at least " + (minBufferSizeInBytes/(audioFormat.getSampleSizeInBits()/8)) + " (samples) according to  AudioTrack.getMinBufferSize().");
         }
    
         //http://developer.android.com/reference/android/media/AudioTrack.html#AudioTrack(int, int, int, int, int, int)
         //audioTrack = new AudioTrack(streamType, sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSizeInBytes,AudioTrack.MODE_STREAM);
         try {
             audioTrack = new AudioTrack.Builder()
                     .setAudioAttributes(new AudioAttributes.Builder()
                             .setUsage(AudioAttributes.USAGE_MEDIA)
                             .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                             .build())
                     .setAudioFormat(new AudioFormat.Builder()
                             .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                             .setSampleRate(sampleRate)
                             .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                             .build())
                     .setBufferSizeInBytes(bufferSizeInBytes)
                     .build();
    
             audioTrack.play();
         } catch (Exception e) {
             e.printStackTrace();
         }
     }
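
The constructor is the only part of the class I changed; the AudioProcessor methods are left as they are in TarsosDSP's AndroidAudioPlayer. Roughly, they look like the sketch below (adapted from the original, so double-check against your TarsosDSP version):

@Override
public boolean process(AudioEvent audioEvent) {
    // Skip the overlapping samples at the start of the buffer and write the new
    // samples to the AudioTrack (2 bytes per 16-bit sample).
    int overlapInSamples = audioEvent.getOverlap();
    int stepSizeInSamples = audioEvent.getBufferSize() - overlapInSamples;
    byte[] byteBuffer = audioEvent.getByteBuffer();
    audioTrack.write(byteBuffer, overlapInSamples * 2, stepSizeInSamples * 2);
    return true;
}

@Override
public void processingFinished() {
    // Release the AudioTrack once the dispatcher has finished with the stream.
    audioTrack.flush();
    audioTrack.stop();
    audioTrack.release();
}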
    

Also, on my device I noticed that the volume control rocker switches just control the ringer volume by default. I had to open an audio menu (three little dots once the ringer volume was 'active') to turn up the media volume.
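
If you'd rather have the rocker adjust the media volume directly while this screen is open, one convenience (separate from the fix above) is to tell the hosting Activity which audio stream it should control:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Make the hardware volume keys adjust STREAM_MUSIC while this Activity is in the foreground.
    setVolumeControlStream(AudioManager.STREAM_MUSIC);
}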
