14

I am working on a musical app with an arpeggio/sequencing feature that requires great timing accuracy. Currently, using `Timer` I have achieved an average jitter of ~5 ms, but a max jitter of ~11 ms, which is unacceptable for fast arpeggios of 8th, 16th, and especially 32nd notes.

I've read that `CADisplayLink` is more accurate than `Timer`, but since its resolution is limited to 1/60th of a second (~16-17 ms), it seems like it would be a less accurate approach than what I've achieved with `Timer`.

Would diving into CoreAudio be the only way to achieve what I want? Is there some other way to achieve more accurate timing?

PlateReverb
  • See [Technical Note TN2169](https://developer.apple.com/library/content/technotes/tn2169/_index.html) – Rob Jul 15 '17 at 16:48
  • Try to use NSTimeInterval https://stackoverflow.com/questions/29583263/measuring-time-accurately-in-swift-for-comparison-across-devices – Yurii Petrov Jul 15 '17 at 16:51
  • @PlateReverb - No, I haven't seen anything in particular, but it certainly should be possible. I'd suggest giving it a go and posting if you have any problems. BTW, before you go into this, you might want to experiment with a GCD timer (https://stackoverflow.com/a/39581096/1271826), using a queue with the highest priority and zero leeway – Rob Jul 15 '17 at 17:10
  • Having said all of that, I might suggest diving into the CoreAudio API to see if it can do what you want. I wonder if they've already solved your problem... – Rob Jul 15 '17 at 17:31
  • One final observation: you didn't say macOS or iOS, but if the latter, I'd make sure to do your tests on a physical device. And when benchmarking, make sure you use a release build with optimizations turned on. – Rob Jul 16 '17 at 04:49
  • The iPhone is a media playback device, with iOS thus optimized. On an iPhone 7, a foreground app can get reliable Audio Unit buffer callbacks of 16 samples, which is less than 1/3 ms of max buffer jitter. If you mix in your percussion samples at an exact sample-count offset (at a 48k sample rate), your jitter is down to less than 2.1 µs. Hence my preference for using Core Audio directly. – hotpaw2 Jul 16 '17 at 16:46
  • To what real-time output is your rhythmic timing relative? – hotpaw2 Jul 16 '17 at 23:05
  • Your jitter source might be from violating one of the 4 rules for any real-time periodic timer code. See: http://atastypixel.com/blog/four-common-mistakes-in-audio-development/#four-rules – hotpaw2 Jul 17 '17 at 14:30
  • @hotpaw2 I apologize for the very late reply. I focused on other parts of my app this past year, but I'm ready to solve this timing issue with Core Audio or AVFoundation. Everything else in my app (sequencer code in Swift & synthesis engines in libpd/pure data) is finished. Since Core Audio is aimed at doing everything, and all I need is a reliable timer (I assume) to solve my sequencer lag & jitter issues... could you give me more specific advice on how to use Core Audio or AVFoundation to generate a reliable clock for my app's sequencer? Thank you! – PlateReverb May 03 '18 at 21:36
  • Generating a clock isn't reliably accurate due to uP threading and dispatch issues. Core Audio works on a pull model, so you might need sequenced and synthesized buffers of raw PCM samples available (slightly) ahead of time so they can be pulled (copied to mixer inputs) by millisecond real-time callbacks. – hotpaw2 May 04 '18 at 01:00

3 Answers

10

I did some testing of Timer and DispatchSourceTimer (aka GCD timer) on an iPhone 7, collecting 1000 data points at an interval of 0.05 seconds. I was expecting the GCD timer to be appreciably more accurate (given that it had a dedicated queue), but I found that they were comparable, with the standard deviation of my various trials ranging from 0.2 to 0.8 milliseconds and a maximum deviation from the mean of about 2-8 milliseconds.
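For reference, a DispatchSourceTimer along those lines (dedicated high-priority queue, zero leeway) looks roughly like this. This is a minimal sketch, not my exact test harness; `tick()` is a placeholder for whatever work you do on each firing:

let timerQueue = DispatchQueue(label: "timer", qos: .userInteractive)  // dedicated, high-priority queue
let timer = DispatchSource.makeTimerSource(queue: timerQueue)          // keep a strong reference to this
timer.schedule(deadline: .now(), repeating: 0.05, leeway: .nanoseconds(0))
timer.setEventHandler {
    tick()
}
timer.resume()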

When trying mach_wait_until as outlined in Technical Note TN2169: High Precision Timers in iOS / OS X, I achieved a timer that was roughly four times as accurate as what I achieved with either Timer or GCD timers.

Having said that, I'm not entirely confident that mach_wait_until is the best approach, as the determination of the specific policy values for thread_policy_set seems to be poorly documented. But the code below reflects the values I used in my tests, using code adapted from How to set realtime thread in Swift? and TN2169:

import Foundation

var timebaseInfo = mach_timebase_info_data_t()

// Promote the current thread to the real-time "time constraint" scheduling policy.
func configureThread() {
    mach_timebase_info(&timebaseInfo)
    let clock2abs = Double(timebaseInfo.denom) / Double(timebaseInfo.numer) * Double(NSEC_PER_SEC)

    let period      = UInt32(0.00 * clock2abs)
    let computation = UInt32(0.03 * clock2abs) // 30 ms of work
    let constraint  = UInt32(0.05 * clock2abs) // finish within 50 ms of starting

    let THREAD_TIME_CONSTRAINT_POLICY_COUNT = mach_msg_type_number_t(MemoryLayout<thread_time_constraint_policy>.size / MemoryLayout<integer_t>.size)

    var policy = thread_time_constraint_policy()
    var ret: Int32
    let thread: thread_port_t = pthread_mach_thread_np(pthread_self())

    policy.period = period
    policy.computation = computation
    policy.constraint = constraint
    policy.preemptible = 0

    ret = withUnsafeMutablePointer(to: &policy) {
        $0.withMemoryRebound(to: integer_t.self, capacity: Int(THREAD_TIME_CONSTRAINT_POLICY_COUNT)) {
            thread_policy_set(thread, UInt32(THREAD_TIME_CONSTRAINT_POLICY), $0, THREAD_TIME_CONSTRAINT_POLICY_COUNT)
        }
    }

    if ret != KERN_SUCCESS {
        mach_error("thread_policy_set:", ret)
        exit(1)
    }
}

I then could do:

// Convert a duration in nanoseconds to Mach absolute-time units.
private func nanosToAbs(_ nanos: UInt64) -> UInt64 {
    return nanos * UInt64(timebaseInfo.denom) / UInt64(timebaseInfo.numer)
}

private func startMachTimer() {
    Thread.detachNewThread {
        autoreleasepool {
            self.configureThread()

            var when = mach_absolute_time()
            for _ in 0 ..< maxCount {   // `maxCount` = however many ticks you want
                when += self.nanosToAbs(UInt64(0.05 * Double(NSEC_PER_SEC)))
                mach_wait_until(when)

                // do something
            }
        }
    }
}

Note, you might want to check whether `when` hasn't already passed (you want to make sure that your timers don't get backlogged if your processing can't be completed in the allotted time), but hopefully this illustrates the idea.
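For instance, one way to handle that backlog is sketched below; `keepRunning` is a hypothetical flag you control to stop the loop:

let interval = nanosToAbs(UInt64(0.05 * Double(NSEC_PER_SEC)))
var when = mach_absolute_time() + interval
while keepRunning {
    mach_wait_until(when)

    // do something

    when += interval
    let now = mach_absolute_time()
    if when < now {
        // We fell behind; jump ahead to the next tick that is still in the future
        // rather than firing a burst of late ticks.
        let missed = (now - when) / interval + 1
        when += missed * interval
    }
}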

Anyway, with mach_wait_until, I achieved greater fidelity than Timer or GCD timers, at the cost of CPU/power consumption, as described in What are the do's and don'ts of code running with high precision timers?

I appreciate your skepticism on this final point, but I suspect it would be prudent to dive into CoreAudio and see if it might offer a more robust solution.

Rob
  • I'm not sure if I can easily quantify that for you. It's likely to be partially a function of what policy values you use, anyway. – Rob Jul 16 '17 at 05:04
  • There's an example in [TN2169: Which timing API(s) should I use?](https://developer.apple.com/library/content/technotes/tn2169/_index.html#//apple_ref/doc/uid/DTS40013172-CH1-TNTAG8000). I've revised my answer with a Swift example, too. – Rob Jul 16 '17 at 15:35
  • Any notes on how to stop the timer? I want my timer to run indefinitely (using it for a metronome), so I'm setting `maxCount` fairly high. What I'm doing now is checking `if (metronome.isOn) { // do something } else { break }` – Adam M Thompson Oct 03 '17 at 00:38
  • Where it says "do something", you could presumably just `return` if some criterion is met (e.g., check some status property or what have you). Or replace that `for` loop with a `while` loop and check your property there... – Rob Oct 03 '17 at 00:46
9

For acceptable musically accurate rhythms, the only suitable timing source is Core Audio or AVFoundation.
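For a sense of what that looks like with AVFoundation: instead of firing a timer, you schedule audio at sample-accurate times. Below is a minimal sketch, not a complete sequencer, assuming a short click sample at a placeholder `clickURL`:

import AVFoundation

let click = try AVAudioFile(forReading: clickURL)   // clickURL: URL of a short percussion sample
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: click.processingFormat)
try engine.start()
player.play()

// Schedule one bar of 16th notes at 120 BPM, each at an exact sample offset.
let sampleRate = click.processingFormat.sampleRate
let secondsPer16th = 60.0 / 120.0 / 4.0
if let nodeTime = player.lastRenderTime,
   let playerTime = player.playerTime(forNodeTime: nodeTime) {
    let start = playerTime.sampleTime + AVAudioFramePosition(0.1 * sampleRate)   // small lead-in
    for i in 0 ..< 16 {
        let offset = AVAudioFramePosition(Double(i) * secondsPer16th * sampleRate)
        let when = AVAudioTime(sampleTime: start + offset, atRate: sampleRate)
        player.scheduleFile(click, at: when, completionHandler: nil)
    }
}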

hotpaw2
  • This is the correct answer. No matter how accurate your timers are, they're still firing on an unrelated thread that will inevitably be ahead of or behind the audio thread. – dave234 Jul 16 '17 at 06:59
  • @dave234 Could you say if either Core Audio or AVFoundation would offer more accurate timing? Are they equally "low level" and precise? – PlateReverb May 02 '18 at 16:27
  • They aren't equally low level. AVFoundation is almost certainly built on top of Core Audio. I'd say they are equally precise though for most contexts. You have to think in terms of "scheduling" audio, which both APIs support. – dave234 May 02 '18 at 18:19
  • I've only used Core Audio for sub-millisecond sample-accurate rhythm generation. But there appear to be newer AVFoundation APIs that allow this as well. – hotpaw2 May 02 '18 at 20:22
  • Any advice on what Core Audio APIs to look at for reliable timing? Since I have all my audio generated through libpd/Pure Data, I just need my sequencer Swift code to trigger notes with sample accurate timing (<1ms lag & jitter). I've been reading through my Core Audio book, and googling for similar examples, but I'm still pretty lost as to how to approach/solve this problem with Core Audio... – PlateReverb May 03 '18 at 22:45
  • It appears that Swift currently cannot be triggered with sub-millisecond accurate timing. In a WWDC 2017 session, Apple said not to use Swift for audio hard real-time code. I use memcpy (which is a C lib routine). – hotpaw2 May 04 '18 at 01:06
1

I'm working on a sequencer app myself, and I would definitely recommend using AudioKit for these purposes. It has its own sequencer class. https://audiokit.io/

Ben Spector