
I'm newly involved in developing an image processing app on iOS. I have lots of experience with OpenCV, but everything is new to me on iOS, and even on OS X.

So I found that the Core Image library and the GPUImage library are the main options around for typical image processing work. I'm interested in knowing which one I should choose as a newcomer to the iOS platform. I have seen some tests done on iOS 8 on an iPhone 6, and it appears Core Image is now faster than GPUImage on GPUImage's own benchmark.

I'm actually looking for a complete solution for image processing development:

  1. What language? Swift, Objective-C, or C and C++ (via Clang)?
  2. What library? GPUImage, Core Image, OpenCV, or GEGL?
  3. Is there an example app?

My goal is to develop some advanced colour correction functions. I'd like it to be as fast as possible, so that in the future I can turn the image processing into video processing without much trouble.

Thanks

tomriddle_1234

3 Answers


I'm the author of GPUImage, so you might weigh my words appropriately. I provide a lengthy description of my design thoughts on this framework vs. Core Image in my answer here, but I can restate that.

Basically, I designed GPUImage to be a convenient wrapper around OpenGL / OpenGL ES image processing. It was built at a time when Core Image didn't exist on iOS, and even when Core Image launched there it lacked custom kernels and had some performance shortcomings.

In the meantime, the Core Image team has done impressive work on performance, leading to Core Image slightly outperforming GPUImage in several areas now. I still beat them in others, but it's way closer than it used to be.

I think the decision comes down to what you value for your application. The entire source code for GPUImage is available to you, so you can customize or fix any part of it that you want. You can look behind the curtain and see how any operation runs. The flexibility in pipeline design lets me experiment with complex operations that can't currently be done in Core Image.

Core Image comes standard with iOS and OS X. It is widely used (plenty of code available), performant, easy to set up, and (as of the latest iOS versions) is extensible via custom kernels. It can do CPU-side processing in addition to GPU-accelerated processing, which lets you do things like process images in a background process (although you should be able to do limited OpenGL ES work in the background in iOS 8). I used Core Image all the time before I wrote GPUImage.

For sample applications, download the GPUImage source code and look in the examples/ directory. You'll find examples of every aspect of the framework for both Mac and iOS, as well as both Objective-C and Swift. I particularly recommend building and running the FilterShowcase example on your iOS device, as it demonstrates every filter from the framework on live video. It's a fun thing to try.

In regards to language choice, if performance is what you're after for video / image processing, language makes little difference. Your performance bottlenecks will not be due to language, but will be in shader performance on the GPU and the speed at which images and video can be uploaded to / downloaded from the GPU.

GPUImage is written in Objective-C, but it can still process video frames at 60 FPS on even the oldest iOS devices it supports. Profiling the code finds very few places where message sending overhead or memory allocation (the slowest areas in this language compared with C or C++) is even noticeable. If these operations were done on the CPU, this would be a slightly different story, but this is all GPU-driven.

Use whatever language is most appropriate and easiest for your development needs. Core Image and GPUImage are both compatible with Swift, Objective-C++, and Objective-C. OpenCV might require a shim to be used from Swift, but if you're talking performance, OpenCV might not be a great choice. It will be much slower than either Core Image or GPUImage.

Personally, for ease of use it can be hard to argue with Swift, since I can write an entire video filtering application using GPUImage in only 23 lines of non-whitespace code.
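As an illustration of how compact such an app can be, here is a rough sketch of a minimal live-video filtering setup using GPUImage's Objective-C classes from Swift (Swift 1.x-era syntax to match the framework at the time; the sepia filter is just an example choice, and this only builds inside an iOS project with GPUImage linked and camera permissions in place):

```swift
import GPUImage

// Inside a UIViewController: capture from the rear camera, run a
// GPU-side sepia filter, and render the result into a GPUImageView.
let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSessionPreset640x480,
                                 cameraPosition: .Back)
camera.outputImageOrientation = .Portrait

let filter = GPUImageSepiaFilter()
let filteredView = GPUImageView(frame: view.bounds)
view.addSubview(filteredView)

// Chain the pipeline: camera -> filter -> view, then start capture.
camera.addTarget(filter)
filter.addTarget(filteredView)
camera.startCameraCapture()
```

Swapping in a different filter, or chaining several, is just a matter of adding more `addTarget(_:)` calls between the camera and the view.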

Brad Larson
  • I'm so glad you could answer personally : ). I read your article about switching to Swift; I guess I'll seriously consider the Swift + GPUImage pipeline. – tomriddle_1234 Jan 21 '15 at 09:30
  • Can I ask whether you think the [Node graph architecture](http://en.wikipedia.org/wiki/Node_graph_architecture) is an accessible way of creating an image processing mobile app? I wish to try this architecture, but it is very complex, so I would like a suggestion. – tomriddle_1234 Jan 22 '15 at 03:46
  • @tomriddle_1234 - I know several applications that use such an architecture for filtering and processing video and images, some based on GPUImage. However, they were designed by large, well-funded teams, so it's not a particularly easy thing for an individual developer to implement. You might start with a simpler design and build to that if you need it. – Brad Larson Jan 22 '15 at 15:33

I have just open-sourced VideoShader, which allows you to describe a video-processing pipeline in a JSON-based scripting language.

https://github.com/snakajima/videoshader

For example, a "cartoon filter" can be described in 12 lines:

{
    "title":"Cartoon I",
    "pipeline":[
        { "filter":"boxblur", "ui":{ "primary":["radius"] }, "attr":{"radius":2.0} },
        { "control":"fork" },
        { "filter":"boxblur", "attr":{"radius":2.0} },
        { "filter":"toone", "ui":{ "hidden":["weight"] } },
        { "control":"swap" },
        { "filter":"sobel" },
        { "filter":"canny_edge", "attr":{ "threshold":0.19, "thin":0.50 } },
        { "filter":"anti_alias" },
        { "blender":"alpha" }
    ]
}

It compiles this script into GLSL (OpenGL's shading language for the GPU) at runtime, and all the pixel operations are done on the GPU.
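To show what a consumer of such a script-driven pipeline deals with before any GLSL is generated, here is a small, self-contained Swift sketch (not VideoShader's actual code) that decodes a pipeline like the one above with Foundation's `JSONSerialization` and walks its stages:

```swift
import Foundation

// A trimmed version of the "Cartoon I" script shown above.
let script = """
{
    "title":"Cartoon I",
    "pipeline":[
        { "filter":"boxblur", "attr":{"radius":2.0} },
        { "control":"fork" },
        { "filter":"boxblur", "attr":{"radius":2.0} },
        { "filter":"toone" },
        { "control":"swap" },
        { "filter":"sobel" },
        { "filter":"canny_edge", "attr":{ "threshold":0.19, "thin":0.50 } },
        { "filter":"anti_alias" },
        { "blender":"alpha" }
    ]
}
"""

// Decode the JSON and enumerate the pipeline stages, the way a
// script-driven engine would before emitting shader code.
let json = try! JSONSerialization.jsonObject(with: script.data(using: .utf8)!) as! [String: Any]
let title = json["title"] as! String
let pipeline = json["pipeline"] as! [[String: Any]]

print("\(title): \(pipeline.count) stages")
for stage in pipeline {
    // Each stage is either a filter, a stream control, or a blender.
    let kind = stage.keys.first { ["filter", "control", "blender"].contains($0) }!
    print("  \(kind) = \(stage[kind]!)")
}
```

Because the pipeline is plain data, adding or reordering filters is a script edit rather than a code change, which is the appeal of this approach.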

Satoshi Nakajima

Well, if you are doing some advanced image processing, then I suggest going with OpenGL ES (I assume I don't need to cover the benefits of OpenGL over UIKit or Core Graphics), and you can start with the tutorials below.

http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial
https://developer.apple.com/library/ios/samplecode/GLImageProcessing/Introduction/Intro.html

Dipen Patel
  • Thanks! Using pure OpenGL seems like overkill. I think I can use something built on top of OpenGL, since I'm not using any 3D features here. – tomriddle_1234 Jan 20 '15 at 06:09
  • You can use OpenGL+GPUImage as described in http://indieambitions.com/idevblogaday/learning-opengl-gpuimage/. OpenGL is not only for 3D; you can increase the speed of your image processing using OpenGL. – Dipen Patel Jan 20 '15 at 06:25
  • 2
    GPUImage is built on OpenGL / OpenGL ES, so it's not really an either / or. That's the whole point of the framework. It introduces negligible overhead in doing so, so there's no huge advantage in rolling your own OpenGL / OpenGL ES rendering code for image processing. – Brad Larson Jan 20 '15 at 18:04