
I've been making some progress with audio programming for iPhone. Now I'm doing some performance tuning, trying to see if I can squeeze more out of this little machine. Running Shark, I see that a significant part of my CPU time (16%) is getting eaten up by objc_msgSend. I understand I can speed this up somewhat by caching pointers to method implementations (IMPs) rather than calling methods using [object message] notation. But if I'm going to go through all this trouble, I wonder if I might just be better off using C++.
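
For reference, here's roughly what I mean by caching an IMP, sketched with a hypothetical Filter object and -processSample: method (the buffers and frame count are assumed):

    // Assume `filter` responds to a hypothetical -processSample: method.
    SEL sel = @selector(processSample:);
    typedef float (*ProcessFn)(id, SEL, float);
    ProcessFn process = (ProcessFn)[filter methodForSelector:sel];

    // One lookup up front, then plain C calls in the tight loop...
    for (int i = 0; i < numFrames; i++)
        out[i] = process(filter, sel, in[i]);

    // ...instead of one objc_msgSend per sample:
    // for (int i = 0; i < numFrames; i++)
    //     out[i] = [filter processSample:in[i]];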

Any thoughts on this?

– morgancodes

4 Answers


Objective C is absolutely fast enough for DSP/audio programming, because Objective C is a superset of C. You don't need to (and shouldn't) make everything a message. Where performance is critical, use plain C function calls (or use inline assembly, if there are hardware features you can leverage that way). Where performance isn't critical, and your application can benefit from the features of message indirection, use the square brackets.
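
As a sketch of that split, with illustrative names (a hypothetical Mixer class; the per-sample loop is plain C, the per-buffer interface is a message):

    #import <Foundation/Foundation.h>

    // Plain C for the per-sample work: static, inlinable, no dispatch.
    static inline void apply_gain(float *buf, int n, float gain) {
        for (int i = 0; i < n; i++)
            buf[i] *= gain;
    }

    @interface Mixer : NSObject
    - (void)renderBuffer:(float *)buf frameCount:(int)n;
    @end

    @implementation Mixer {
        float _gain;
    }
    - (void)renderBuffer:(float *)buf frameCount:(int)n {
        // One message send per buffer is cheap; the tight loop stays in C.
        apply_gain(buf, n, _gain);
    }
    @end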

The Accelerate framework on OS X, for example, is a great high-performance library: it exposes only standard C99 function calls, and you can call them from Objective C code without any wrapping or indirection.
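
For instance, a minimal sketch (in, out, and n are assumed to be your buffers and frame count):

    #import <Accelerate/Accelerate.h>

    // Scale n floats from `in` into `out` -- a plain C99 vDSP call,
    // usable directly inside an Objective-C method with no wrapper.
    float gain = 0.5f;
    vDSP_vsmul(in, 1, &gain, out, 1, n);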

– Stephen Canon
  • Thanks Stephen. If I want to code audio in an object-oriented way, it seems like Obj-C may not help me much, though. If I want modular, pluggable components and don't want to do acrobatics to call methods on my objects, it seems like C++ may suit my needs better. – morgancodes May 03 '10 at 21:51
  • @morgancodes: In most high-performance codes that I've encountered, modularity happens at a relatively high level, not at the level of each and every function call. There's nothing stopping you from having high-level modular pluggable components using Objective-C message dispatch whose core low-level implementations use C function calls. If that really isn't possible in your situation, but C++ virtual function lookup overhead *is* somehow acceptable, then I suppose you should use C++. That seems like a pretty narrow situation, however. – Stephen Canon May 03 '10 at 22:00
  • @StephenCanon *If that really isn't possible in your situation, but C++ virtual function lookup overhead is somehow acceptable, then I suppose you should use C++. That seems like a pretty narrow situation, however.* -- It's not at all 'narrow'. C++'s dynamic dispatch has a low and practically invariant complexity; i.e. it's like using function pointers, always. ObjC's dispatch must go through the ObjC runtime. This may (but often does not) result in many lock acquisitions and/or allocations. So the difference is that C++ dynamic dispatch has a consistent and predictable cost in time. – justin Jun 29 '13 at 07:57
  • (cont) technically, this may have been accomplished in ObjC for up to 16 predefined selectors (iirc). also, it's not unusual to avoid throwing and `dynamic_cast`-ing in realtime rendering because their time complexity is not reliable (enough) across implementations. – justin Jun 29 '13 at 08:04
  • @justin: that's more or less exactly my point; if you're not going to use the "fancier" features of C++, then it's not really buying you anything that you can't easily do with straight C function calls (possibly through a function pointer), which are also part of the Objective-C language. – Stephen Canon Jun 29 '13 at 14:11
  • @StephenCanon the qualification i think i did not make clear enough in the comment: ObjC types/messaging is bad for *realtime high priority audio threads* (e.g. a typical render callback). if you are doing offline rendering, (potentially) blocking activities like using objc messaging are fine. in realtime, one should look to the worst case scenario for timing, whereas the average message overhead is a better metric for offline purposes when deciding whether to use C or ObjC. so i should have added "within the domain of realtime audio" someplace in that post. – justin Jun 29 '13 at 20:30
  • @StephenCanon so my point was that within realtime audio, objc messaging (and ARC ops, if one uses ARC) makes using objc a bad choice because it has the potential to take a long time -- just like it is a bad idea to request heap allocations in a realtime render/pull callback. C++'s dispatch does not have this problem because it is both fast and predictable (similar to function pointers). but yes, of course, one can just use C as you suggested in order to meet these guarantees, and it will be as fast as C (because it is). – justin Jun 29 '13 at 20:41
  • @StephenCanon re C++ feature restrictions: it's a short list of C++ lang features which should be avoided in realtime render/pull contexts. the omissions are far smaller than the additions -- e.g. it still has templates/generics, classes, scalability, good type safety, the ability to define con/de-structors and copying, and it's as fast as C. so there are still many lang additions upon C, despite the omissions. the omission list should have little to no influence when choosing C or C++; choosing the right lang should still come down to the team/project/env/tools IMO. – justin Jun 29 '13 at 21:36

The problem with Objective-C for tasks like DSP is not speed per se, but rather the uncertainty of when the inevitable bottlenecks will occur.

All languages have bottlenecks, but in statically linked languages like C++ you can better predict when and where in the code they will occur. With Objective-C's runtime coupling, the time it takes to find the appropriate object and send it a message is not necessarily slow, but it is variable and unpredictable. Objective-C's flexibility in UI, data management and reuse works against it for tightly timed tasks.

Most audio processing in the Apple API is done in C or C++ because of the need to nail down the time it takes code to execute. However, it's easy to mix Objective-C, C and C++ in the same app. This allows you to pick the best language for the immediate task at hand.
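
For example, giving a source file the .mm extension compiles it as Objective-C++, so the two coexist in one file (a sketch; the class names are illustrative):

    // Engine.mm -- the .mm extension compiles this file as Objective-C++.
    #import <Foundation/Foundation.h>

    class Oscillator {                 // C++ handles the tightly timed core
    public:
        float next() { phase += increment; return phase; }
        float phase = 0.0f;
        float increment = 0.01f;
    };

    @interface Engine : NSObject
    - (float)nextSample;
    @end

    @implementation Engine {
        Oscillator _osc;               // a C++ object inside an Objective-C class
    }
    - (float)nextSample { return _osc.next(); }
    @end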

– TechZen

Is Objective C fast enough for DSP/audio programming?

Real Time Rendering

Definitely not. The Objective-C runtime and its libraries are simply not designed for the demands of real-time audio rendering. The fact is, it's virtually impossible to guarantee that using the ObjC runtime or libraries such as Foundation (or even CoreFoundation) will not result in your renderer missing its deadline.

The common case is a lock -- even a simple heap allocation (malloc, new/new[], [[NSObject alloc] init]) will likely require a lock.

To use ObjC is to use libraries and a runtime which assume locks are acceptable at any point within their execution. A lock can suspend execution of your render thread (e.g. in the middle of your render callback) while it waits to acquire the lock. With your render thread held up, you can miss your render deadline, ultimately resulting in dropouts/glitches.

Ask a pro audio plugin developer: they will tell you that blocking within the realtime render domain is forbidden. You cannot e.g. touch the filesystem or create heap allocations, because you have no practical upper bound on the time they will take to finish.
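
To illustrate, this is the usual shape of a realtime-safe Audio Unit render callback: everything is allocated ahead of time, and the callback itself is plain C (a sketch; the SynthState struct and its "DSP" are hypothetical placeholders):

    #import <AudioToolbox/AudioToolbox.h>

    // All state is allocated before rendering starts.
    typedef struct { float phase, increment; } SynthState;

    static OSStatus RenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        SynthState *s = (SynthState *)inRefCon;
        float *out = (float *)ioData->mBuffers[0].mData;
        // No malloc, no locks, no file I/O, no ObjC messages in here.
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            s->phase += s->increment;   // placeholder DSP
            out[i] = s->phase;
        }
        return noErr;
    }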

Here's a nice introduction: http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing

Offline Rendering

Yes, it would be acceptably fast in most scenarios for high-level messaging. At the lower levels, I recommend against using ObjC because it would be wasteful -- it could take many, many times longer to render if ObjC messaging is used at that level (compared to a C or C++ implementation).

See also: Will my iPhone app take a performance hit if I use Objective-C for low level code?

– justin

objc_msgSend is just a utility. The cost of sending a message is not just the cost of sending the message. It is the cost of doing everything that the message initiates. (Just like the true cost of a function call is its inclusive cost, including I/O if there is any.)

What you need to know is where are the time-dominant messages coming from and going to and why. Stack samples will tell you which routines / methods are being called so often that you should figure out how to call them more efficiently.

You may find that you're calling them more than you have to.

Especially if you find that many of the calls are for creating and deleting data structures, you can probably find better ways to do that.
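
For example, the difference between allocating per call and reusing a preallocated buffer (a sketch; kMaxFrames and the surrounding setup/teardown code are assumed):

    #include <stdlib.h>

    // Before: a fresh allocation (and a likely lock) on every call.
    //     float *scratch = malloc(n * sizeof(float));
    //     /* ... use scratch ... */
    //     free(scratch);

    // After: allocate once during setup, reuse on every call.
    float *scratch = malloc(kMaxFrames * sizeof(float));   // setup, once
    /* ... every subsequent call reuses scratch ... */
    free(scratch);                                         // teardown, once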

– Mike Dunlavey
  • Thanks Mr. Dunlavey. Let me make sure I'm not confused, though. When I profile my code in Shark and I see "15%" next to objc_msgSend, that looks like an opportunity for optimization to me. I believe that 15% is the actual expense of sending the message, not executing the code that the message initiates. Correct? So in a tight loop, I can indeed gain some performance by avoiding objc_msgSend, right? – morgancodes May 04 '10 at 16:33
  • @morgancodes: Don't concentrate on that routine. Here's what to do: while it is running, pause it about 20 times & record the call stack. On 3 of those (roughly), you should see it in objc_msgSend. On those 3 stacks, if you look up one level you will see the exact line(s) of code responsible for that time. While you're at it, look for other things that show up on multiple stacks & see what else you could get rid of. In my experience, code before tuning has a lot more room for improvement than 15%. – Mike Dunlavey May 04 '10 at 17:12
  • @morgancodes: Shark says it takes 10,000 samples over 10 sec. Suppose a line of code is on the stack 20% of the time, so removing it could save that overall time. If you take 20 samples, then you will see it on 20%(4) of the samples, give or take 9%(1.8). If 10,000 samples are taken, the line will show up 20% (2000) of the samples, give or take 0.4% (40). Either way, will you miss it? Problem with Shark is it gives you needless precision but doesn't give you the insight you get by looking at specific samples. – Mike Dunlavey May 04 '10 at 17:21
  • Hmmm. The debugger in Xcode isn't giving me a useful call stack. I also don't get any useful profiling info out of Instruments; I have to use Shark. I suspect this has something to do with the fact that all of my audio code gets called by a C callback from an audio unit, rather than from the iPhone's main run loop. So, I'll look into that and then try your idea. – morgancodes May 04 '10 at 17:23