I have read in several places that for streaming audio you need to enqueue at least 2 buffers when using SLBufferQueueItf or SLAndroidSimpleBufferQueueItf. Using 2 buffers seems to be the most common approach: one buffer is being played while the other is filled with new data. That makes perfect sense in theory.
In the native-audio sample project in the NDK, there is this comment:
"for streaming playback we would typically enqueue at least 2 buffers to start"
Does that mean that playback doesn't start until 2 buffers have been enqueued?
I tried enqueuing 2 buffers by calling Enqueue(....) twice consecutively, but the buffer queue callback gets called before I can enqueue the second one.
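For reference, this is roughly what I tried (just a fragment from my setup; the names bqPlayerBufferQueue, buffer1, buffer2 and BUFFER_SIZE are placeholders from my own code):

    // the player is already in SL_PLAYSTATE_PLAYING at this point
    SLresult result;

    // first Enqueue: this buffer starts playing almost immediately,
    // and the buffer queue callback fires before the next line even runs
    result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, buffer1, BUFFER_SIZE);
    assert(SL_RESULT_SUCCESS == result);

    // second Enqueue: by the time this runs, the callback has already asked for more data
    result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, buffer2, BUFFER_SIZE);
    assert(SL_RESULT_SUCCESS == result);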
If I actually need to queue up 2 buffers before playback starts, at what point can I start filling the third buffer? (The "third" buffer would really be reusing the first buffer I enqueued.)
I have my SLDataLocator set up like so:
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
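For context, the rest of my player setup looks roughly like this (a sketch with error checking stripped; engineEngine, outputMixObject and the bqPlayerCallback function are created elsewhere, basically the same way as in the native-audio sample):

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    SLObjectItf bqPlayerObject;
    SLPlayItf bqPlayerPlay;
    SLAndroidSimpleBufferQueueItf bqPlayerBufferQueue;

    // PCM source: mono, 16-bit, 44.1 kHz, little-endian, fed from the 2-slot buffer queue
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_44_1,
                                   SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                   SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // sink: the output mix
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create the player, requesting the Android simple buffer queue interface
    const SLInterfaceID ids[1] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
    const SLboolean req[1] = {SL_BOOLEAN_TRUE};
    (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject, &audioSrc, &audioSnk,
                                       1, ids, req);
    (*bqPlayerObject)->Realize(bqPlayerObject, SL_BOOLEAN_FALSE);

    // get the buffer queue interface and register the callback
    (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE,
                                    &bqPlayerBufferQueue);
    (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, NULL);

    // start playing; the Enqueue calls shown above happen after this
    (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_PLAY, &bqPlayerPlay);
    (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);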
In practice the process seems to go like this: enqueue the first buffer -> when the callback comes (which is instantly), enqueue the second buffer -> wait for the second callback to enqueue the third buffer.
I don't see how enqueuing 2 buffers is even possible, aside from enqueuing one and then another in the callback. Also, I am using the native buffer size for my device.
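So the only place I can actually enqueue the "next" buffer is inside the callback, roughly like this (again just a sketch; nextFreeBuffer() and fillBuffer() are placeholders for my own code that alternates between the two buffers and generates the audio data):

    // called each time the player finishes consuming a buffer
    void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context) {
        short *buf = nextFreeBuffer();        // placeholder: alternates between my two buffers
        fillBuffer(buf, BUFFER_SIZE);         // placeholder: fills the buffer with new audio data
        (*bq)->Enqueue(bq, buf, BUFFER_SIZE); // hand the refilled buffer back to the queue
    }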
Also, I'm not sure "enqueuing" is even a word.
Edit: There is some relevant info in this commit for 4.4: https://android.googlesource.com/platform/frameworks/wilhelm/+/92e53bc98cd938e9917fb02d3e5a9be88423791d%5E!/ which I just found via this useful SO post: Low-latency audio playback on Android