
I'm writing a simple NDK OpenSL ES audio app that records the user's touches on a virtual piano keyboard and then plays them back forever over a set loop. After much experimenting and reading, I've settled on using a separate POSIX thread running a timing loop. As you can see in the code, it subtracts the processing time from the sleep time so that the interval of each iteration stays as close to the desired sleep interval as possible (in this case 5000000 nanoseconds).
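For reference, the timing fields used below are assumed to live in a global struct along these lines (a hypothetical reconstruction; the field names are the ones the code references):

#include <time.h>     // struct timespec, clock_gettime, nanosleep
#include <sys/time.h> // struct timeval, struct timezone, gettimeofday

// Hypothetical container for the timing state referenced as "timing" below.
struct app_timing {
    struct timespec start_time_s;  // taken at the top of each loop iteration
    struct timespec finish_time_s; // taken after the tic has been processed
    struct timespec diff_time_s;   // remaining time to sleep for this tic
    struct timeval  curr_time;     // only used for the gettimeofday debug log
    struct timezone tzp;
};

static struct app_timing timing;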

void init_timing_loop() {
    pthread_t fade_in;
    pthread_create(&fade_in, NULL, timing_loop, (void*)NULL);
}

void* timing_loop(void* args) {

    while (1) {

        clock_gettime(CLOCK_MONOTONIC, &timing.start_time_s);

        tic_counter(); // simple logic gates that cycle the current tic
        play_all_parts(); // for-loops through all parts and plays any notes (From an OpenSL buffer) that fall on the current tic

        clock_gettime(CLOCK_MONOTONIC, &timing.finish_time_s);

        timing.diff_time_s.tv_nsec = (5000000 - (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec));

        nanosleep(&timing.diff_time_s, NULL);
    }

    return NULL;
}

The problem is that even with this approach the results are better but still quite inconsistent. Sometimes notes are delayed by as much as 50 ms, which makes for very wonky playback.

Is there a better way of approaching this? To debug, I ran the following code:

gettimeofday(&timing.curr_time, &timing.tzp);
__android_log_print(ANDROID_LOG_DEBUG, "timing_loop", "gettimeofday: %ld %ld",
    timing.curr_time.tv_sec, timing.curr_time.tv_usec);

This gives a fairly consistent readout that doesn't reflect the playback inaccuracies at all. Are there other forces at work on Android preventing accurate timing? Or is OpenSL ES a potential issue? All the buffer data is loaded into memory - could there be bottlenecks there?

Happy to post more OpenSL code if needed... but at this stage I'm trying to figure out whether this thread loop is accurate or whether there's a better way to do it.

Michael J Petrie
  • One thought: keep the audio buffer full at all times. Fill silences by "playing" an appropriate number of samples of silence. This way everything is timed through the audio clock, and variable latencies won't affect you (so long as you keep the buffers full). (A rough sketch of this idea appears after these comments.) – fadden Jun 14 '13 at 04:19
  • Thanks for your reply, that seems like a better way to go. The question with having buffer based timing is: For polyphony I'll need to be loading up to 16 buffer queues at once. If I keep them full at all times - do you think they'll stay in sync? I'll give it a test anyway but thought I would ask in case you have experience doing so. – Michael J Petrie Jun 14 '13 at 06:35
  • [This](http://stackoverflow.com/questions/4485072/accurate-timing-in-ios?lq=1) thread touches on a similar problem but with iOS, if anyone's interested. – Michael J Petrie Jun 14 '13 at 06:36
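For what it's worth, here is a rough sketch of fadden's suggestion - driving the sequencer from the OpenSL ES buffer queue callback instead of a sleeping thread. The buffer-queue interface and callback signature are the standard Android OpenSL ES ones; FRAMES_PER_BUFFER, mix_parts_into() and the reuse of tic_counter() are hypothetical stand-ins for the app's own pieces:

#include <string.h>
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

#define FRAMES_PER_BUFFER 220  // ~5 ms at 44.1 kHz, i.e. one buffer per tic

static short out_buffer[FRAMES_PER_BUFFER]; // 16-bit mono PCM for simplicity

// Called by OpenSL ES each time the previously enqueued buffer has been consumed.
void bq_callback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    (void) context;

    // Always hand back a full buffer: mixed note data if anything falls on
    // this tic, otherwise silence. Timing now comes from the audio clock,
    // so there is no nanosleep and no drift to compensate for.
    memset(out_buffer, 0, sizeof(out_buffer));
    mix_parts_into(out_buffer, FRAMES_PER_BUFFER); // hypothetical mixer; no-op if silent

    tic_counter(); // advance the sequencer in audio time, not wall-clock time

    (*bq)->Enqueue(bq, out_buffer, sizeof(out_buffer));
}

// After the player object is realized, register the callback once and prime
// the queue with one buffer of silence to start the chain:
//   (*bq_itf)->RegisterCallback(bq_itf, bq_callback, NULL);
//   (*bq_itf)->Enqueue(bq_itf, out_buffer, sizeof(out_buffer));

In a real app you would double-buffer (fill one buffer while the other plays) and mix all parts into the one stream rather than running 16 separate queues, but the key point is that the tic counter advances with the audio hardware instead of with thread scheduling.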

1 Answer


You should take the seconds into account as well when using clock_gettime: timing.start_time_s.tv_nsec may be greater than timing.finish_time_s.tv_nsec, because tv_nsec wraps back to zero whenever tv_sec is incremented.

timing.diff_time_s.tv_nsec =
(5000000 - (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec));

Try something like:

#define NS_IN_SEC 1000000000
(timing.finish_time_s.tv_sec * NS_IN_SEC + timing.finish_time_s.tv_nsec) -
(timing.start_time_s.tv_sec * NS_IN_SEC + timing.start_time_s.tv_nsec)
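Putting it together, the body of the loop might compute the sleep like this (a sketch assuming the same struct fields; the 64-bit arithmetic and the clamp for over-budget iterations are additions here):

#include <stdint.h>

#define NS_IN_SEC    1000000000LL
#define TIC_INTERVAL 5000000LL   // the 5 ms period from the question

// Elapsed time in nanoseconds, using 64-bit math to avoid overflow.
int64_t elapsed =
    (int64_t)(timing.finish_time_s.tv_sec - timing.start_time_s.tv_sec) * NS_IN_SEC
    + (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec);

int64_t remaining = TIC_INTERVAL - elapsed;
if (remaining < 0)
    remaining = 0;               // processing overran the tic: don't sleep at all

timing.diff_time_s.tv_sec  = (time_t)(remaining / NS_IN_SEC);
timing.diff_time_s.tv_nsec = (long)(remaining % NS_IN_SEC);
nanosleep(&timing.diff_time_s, NULL);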
auselen
  • Thanks for that, I'll update my code. I've decided to use buffer based timing for sound playback, but will still use a rough thread for volume/panning etc. – Michael J Petrie Jun 16 '13 at 07:15