
I have a "software renderer" that I am porting from PC to the iPhone. What is the fastest way to manually update the screen with a buffer of pixels on the iPhone? For instance, on Windows the fastest function I have found is SetDIBitsToDevice.

I don't know much about the iPhone or its libraries, and there seem to be so many layers and different types of UI elements, so I might need a lot of explanation...

For now I'm just going to constantly update a texture in OpenGL and render that to the screen; I very much doubt that this is going to be the best way to do it.

UPDATE:

I have tried the OpenGL screen-sized texture method:

I got 17 fps...

I used a 512x512 texture (because it needs to be a power of two).

Just the call

glTexSubImage2D(GL_TEXTURE_2D,0,0,0,512,512,GL_RGBA,GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer());

seemed to be responsible for pretty much ALL of the slowdown.

Commenting it out, and leaving in all my software-rendering GUI code and the rendering of the now non-updating texture, resulted in 60 fps, 30% renderer utilization, and no notable spikes from the CPU.

Note that GetBuffer() simply returns a pointer to the software back buffer of the GUI system; there is no re-jigging or resizing of the buffer in any way, and it is properly sized and formatted for the texture. So I am fairly certain the slowdown has nothing to do with the software renderer, which is the good news: it looks like if I can find a way to update the screen at 60 fps, software rendering should work for the time being.

I tried doing the texture update call with 512x320 rather than 512x512; this was oddly even slower, running at 10 fps. It also says the render utilization is only about 5%, and all the time is being wasted in a call to Untwiddle32bpp inside OpenGL ES.

I can change my software renderer to natively render to any pixel format, if it would result in a more direct blit.

FYI, tested on a 2.2.1 iPod touch 2G (so like an iPhone 3G on steroids).
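
In case it helps frame the question, the upload path I'm using looks roughly like this (a sketch; GL_BGRA_EXT comes from the GL_APPLE_texture_format_BGRA8888 extension, and whether a BGRA backbuffer actually skips the untwiddle pass is just my guess):

// one-time setup: a screen-sized power-of-two texture, allocated once
GLuint screenTex;
glGenTextures(1, &screenTex);
glBindTexture(GL_TEXTURE_2D, screenTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, NULL);

// per frame: re-upload the software backbuffer, then draw a textured quad;
// the hope is that a BGRA-formatted backbuffer matches the hardware layout
// and avoids the Untwiddle32bpp call showing up in the profile
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                GL_BGRA_EXT, GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer());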

UPDATE 2:

I have just finished writing the Core Animation/Core Graphics method. It looks good, but I am a little worried about how it updates the screen each frame, basically ditching the old CGImage and creating a brand-new one... check it out in 'someRandomFunction' below: is this the quickest way to update the image? Any help would be greatly appreciated.

//
//  catestAppDelegate.m
//  catest
//
//  Created by User on 3/14/10.
//  Copyright __MyCompanyName__ 2010. All rights reserved.
//

#import "catestAppDelegate.h"
#import "catestViewController.h"
#import "QuartzCore/QuartzCore.h"

const void* GetBytePointer(void* info)
{
    // this is currently only called once
    return info; // info is a pointer to the buffer
}

void ReleaseBytePointer(void* info, const void* pointer)
{
    // don't care, just using the one static buffer at the moment
}


size_t GetBytesAtPosition(void* info, void* buffer, off_t position, size_t count)
{
    // I don't think this ever gets called
    memcpy(buffer, ((char*)info) + position, count);
    return count;
}

CGDataProviderDirectCallbacks providerCallbacks =
{ 0, GetBytePointer, ReleaseBytePointer, GetBytesAtPosition, 0 };


static CGImageRef cgIm;

static CGDataProviderRef dataProvider;
unsigned char* imageData;
const size_t imageDataSize = 320 * 480 * 4;
NSTimer *animationTimer;
NSTimeInterval animationInterval= 1.0f/60.0f;


@implementation catestAppDelegate

@synthesize window;
@synthesize viewController;


- (void)applicationDidFinishLaunching:(UIApplication *)application {

    [window makeKeyAndVisible];

    const size_t byteRowSize = 320 * 4;
    imageData = malloc(imageDataSize);

    for (int i = 0; i < imageDataSize/4; i++)
        ((unsigned int*)imageData)[i] = 0xFFFF00FF; // just set it to some random init color, currently yellow


    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    dataProvider =
    CGDataProviderCreateDirect(imageData, imageDataSize,
                               &providerCallbacks);  // currently global

    cgIm = CGImageCreate
    (320, 480,
     8, 32, byteRowSize, colorSpace,
     kCGImageAlphaNone | kCGBitmapByteOrder32Little,
     dataProvider, 0, false, kCGRenderingIntentDefault);  // also global, probably doesn't need to be

    CGColorSpaceRelease(colorSpace); // the image keeps its own reference

    self.window.layer.contents = cgIm; // set the UIWindow's CALayer's contents to the image, yay works!

   // CGImageRelease(cgIm);  // we should do this at some stage...
   // CGDataProviderRelease(dataProvider);

    animationTimer = [NSTimer scheduledTimerWithTimeInterval:animationInterval target:self selector:@selector(someRandomFunction) userInfo:nil repeats:YES];
    // set up a timer in the attempt to update the image

}
float col = 0;

-(void)someRandomFunction
{
    // update the original buffer
    for (int i = 0; i < imageDataSize; i++)
        imageData[i] = (unsigned char)(int)col;

    col += 256.0f/60.0f;
    if (col >= 256.0f) col -= 256.0f; // wrap so the cast stays in range

    // and currently the only way I know how to apply that buffer update to the screen is to
    // create a new image and bind it to the layer...???
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGImageRelease(cgIm); // the layer retains last frame's image, so dropping our reference here is safe

    cgIm = CGImageCreate
    (320, 480,
     8, 32, 320*4, colorSpace,
     kCGImageAlphaNone | kCGBitmapByteOrder32Little,
     dataProvider, 0, false, kCGRenderingIntentDefault);

    CGColorSpaceRelease(colorSpace);

    self.window.layer.contents = cgIm;

    // and that currently works, updating the screen, but I don't know how well it performs...
}


- (void)dealloc {
    [viewController release];
    [window release];
    [super dealloc];
}


@end
matt
  • Hey, I ran into this problem as well. Is the only solution to create and destroy a CGImage every frame? (I tried this and it's slower than glTexSubImage2D.) It seems that the whole point of the direct provider is that it should point to that buffer, so if I change the buffer the image is updated. Any help on this? – Justin Meiners Aug 28 '12 at 15:57
  • I thought the point of using CGDataProviderCreateDirect was direct access to a pixel buffer, refreshed using CADisplayLink. Unfortunately, this is horribly slow. Any workaround discovered? – aoakenfo Jul 08 '13 at 01:40

5 Answers


The fastest App Store-approved way to do CPU-only 2D graphics is to create a CGImage backed by a buffer using CGDataProviderCreateDirect and assign that to a CALayer's contents property.

For best results use the kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little or kCGImageAlphaNone | kCGBitmapByteOrder32Little bitmap types, and double-buffer so that the display is never in an inconsistent state.

edit: this should be faster than drawing to an OpenGL texture in theory, but as always, profile to be sure.

edit2: CADisplayLink is a useful class no matter which compositing method you use.
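
Roughly like this (only a sketch: the names are illustrative, it borrows the providerCallbacks from the question above, and error handling is omitted):

// two buffers and two CGImages; render into one while the other is on screen
static unsigned char *buffers[2];
static CGImageRef images[2];
static int backIndex = 0;

static void createImages(void)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    for (int i = 0; i < 2; i++) {
        buffers[i] = malloc(320 * 480 * 4);
        CGDataProviderRef provider = CGDataProviderCreateDirect(
            buffers[i], 320 * 480 * 4, &providerCallbacks);
        images[i] = CGImageCreate(320, 480, 8, 32, 320 * 4, colorSpace,
            kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little,
            provider, NULL, false, kCGRenderingIntentDefault);
        CGDataProviderRelease(provider); // the image retains the provider
    }
    CGColorSpaceRelease(colorSpace);
}

// per frame -- a CADisplayLink callback is a good place for this
static void presentFrame(CALayer *layer)
{
    // ... software-render the new frame into buffers[backIndex] here ...
    layer.contents = (id)images[backIndex]; // the layer retains the image
    backIndex = 1 - backIndex;              // flip; last frame stays visible
}

That way the layer is always displaying a fully rendered buffer while you scribble into the other one.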

rpetrich
  • Would this be fast enough to do 24 fps video? – Stefan Arentz Mar 07 '10 at 13:49
  • @St3fan: I'm hoping it's fast enough to do 60 fps at least! I'm beginning to get the feeling that the iPhone's APIs and OS are even more bloated than desktop Windows! @rpetrich, thanks, I'll try it out. I hardly know anything about iPhone dev, like the GUI stuff etc., so it might take me a while. Can you recommend a good template to use to get started? I've only used the OpenGL one. – matt Mar 07 '10 at 14:30
  • Matt, the iPhone APIs are extremely nice compared to Win32. Some things are simply not possible with higher-level Cocoa code. File a bug request with Apple if you think this needs a nicer Cocoa API. – Stefan Arentz Mar 07 '10 at 14:33
  • In my experience, writing directly to an OpenGL texture will be the fastest way to present 2-D content to the screen. Core Animation is layered on top of OpenGL, so setting the contents of a CALayer just causes them to be transferred into an OpenGL texture. You can avoid the middleman by writing directly to the texture. On the Mac, we have extensions for direct memory transfer to a texture: http://developer.apple.com/mac/library/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_texturedata/opengl_texturedata.html , but I don't see anything similar on the iPhone. – Brad Larson Mar 07 '10 at 15:30
  • @matt: I don't really have any templates suitable for your situation; I don't think you'll be able to get more than 60Hz as the display doesn't update any faster than that. Also, CoreAnimation was designed with hardware acceleration in mind--as much work as possible is offloaded to the GPU. – rpetrich Mar 07 '10 at 20:58
  • 1
    @Brad Larson: On the iPhone, everything ends up in a QuartzCore scene graph which is rendered by the GPU. In the case of OpenGL content, the scene is rendered to a CALayer by the GPU in your application and then that layer is composited (along with the rest of the graph) to the framebuffer in SpringBoard. OpenGL path: CPU render -> OGL texture -> OGL CALayer -> framebuffer. CALayer path: CPU render -> standard CALayer -> framebuffer. – rpetrich Mar 07 '10 at 21:04
  • @rpetrich: That's interesting, but it runs counter to my experience on the Mac. In my testing, the fastest way to display a 2-D rectangle (a live 60 FPS video feed from a CCD camera) was to use OpenGL directly (hosted in a CAOpenGLLayer). Drawing into the contents of a CALayer was significantly slower. Now, this is a different platform, and I was using the OpenGL texture transfer extensions described above, so I could be wrong about how the iPhone handles this same situation. – Brad Larson Mar 07 '10 at 21:33
  • @Brad Larson: Drawing to the contents of a CALayer will always be slow as that involves a copy (on the CPU no less, where it's slow). Setting the contents, on the other hand, can be quick if the new content is already in video memory. Since the iPhone uses a unified memory model, all memory is video memory. I would imagine OS X behaves a lot differently (and the use of IOSurface would be required). – rpetrich Mar 07 '10 at 22:44
  • @rpetrich: Fascinating. So your suggestion would be to use two CGImages with image data provided via a buffer in CGDataProviderCreateDirect(), then swap setting them as the contents for a CALayer for every other frame to be displayed? I'd like to try this out and see how it performs. – Brad Larson Mar 07 '10 at 23:57
  • @Brad Larson: Yeah, that's the gist of it. If it's the only layer visible on-screen, the only copy should be by the GPU from the CALayer to the framebuffer. If you do try it out, I'd love to see it in comparison to piping through OpenGL. – rpetrich Mar 08 '10 at 04:02
  • Thanks for the discussion guys, that's quite a bit of insight. I will try this all out tonight. I am a bit worried about whether it will work anyway; we are actually targeting the iPad. It seems to have 4 times the res (1024x768) with basically the same CPU as the 3GS... It is only a basic GUI thing (advantages over CA? it's multi-platform, I know how it works, and I can tweak it any way I want), but it might be all too much. Also, as I said, I have no idea about iPhone/OSX stuff, so if anyone does try it out, I would love a step by step! – matt Mar 08 '10 at 06:47
  • @matt: The trouble you will encounter is not in getting the buffer to the screen quick enough, but in the drawing to the buffer itself--the GPU has a lot of dedicated hardware designed to push pixels that the CPU can't even come close to. – rpetrich Mar 08 '10 at 08:34
  • @rpetrich: hey, I have come up with some code using the CA method. I'm not sure about the way I update the image each frame though... deleting and creating the CGImage each frame... check it out above in the question, "UPDATE 2". Haven't had a chance to test this on hardware yet, will test tomorrow. Thanks – matt Mar 18 '10 at 13:45
  • Having spent plenty of time on this problem, I can say confidently this answer is just plain wrong. The performance with CoreGraphics is really quite poor, especially so with Retina devices where many pixels need to be pushed. The right solution is to use OpenGL. Yes, glTexImage2D is a bottleneck, but it is WAY faster in almost every single case. – ldoogy Sep 13 '16 at 02:38

The fastest way is to use IOFrameBuffer/IOSurface, which are private frameworks.

So OpenGL seems to be the only possible way for App Store apps.

kennytm
  • I have updated the question with the results from the opengl method, doesn't look good. – matt Mar 11 '10 at 23:26

Just to post my comment to @rpetrich's answer in the form of an answer: in my tests I found OpenGL to be the fastest way. I've implemented a simple object (a UIView subclass) called EEPixelViewer that does this generically enough that it should work for most people, I think.

It uses OpenGL to push pixels in a wide variety of formats (24bpp RGB, 32-bit RGBA, and several YpCbCr formats) to the screen as efficiently as possible. The solution achieves 60fps for most pixel formats on almost every single iOS device, including older ones. Usage is super simple and requires no OpenGL knowledge:

pixelViewer.pixelFormat = kCVPixelFormatType_32RGBA;
pixelViewer.sourceImageSize = CGSizeMake(1024, 768);
EEPixelViewerPlane plane;
plane.width = 1024;
plane.height = 768;
plane.data = pixelBuffer;
plane.rowBytes = plane.width * 4;
[pixelViewer displayPixelBufferPlanes:&plane count:1 withCompletion:nil];

Repeat the displayPixelBufferPlanes call for each frame (which loads the pixel buffer to the GPU using glTexImage2D), and that's pretty much all there is to it. The code is smart in that it tries to use the GPU for any kind of simple processing required such as permuting the color channels, converting YpCbCr to RGB, etc.
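
For example, driven from a CADisplayLink (a sketch; pixelViewer and pixelBuffer are the ones set up above, and renderFrameInto stands in for your own renderer):

// at setup time: fire a callback once per display refresh
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(presentFrame:)];
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

- (void)presentFrame:(CADisplayLink *)link
{
    renderFrameInto(pixelBuffer); // your software renderer fills the buffer

    EEPixelViewerPlane plane;
    plane.width = 1024;
    plane.height = 768;
    plane.data = pixelBuffer;
    plane.rowBytes = plane.width * 4;
    [pixelViewer displayPixelBufferPlanes:&plane count:1 withCompletion:nil];
}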

There is also quite a bit of logic for honoring scaling using the UIView's contentMode property, so UIViewContentModeScaleToFit/Fill, etc. all work as expected.

ldoogy

Perhaps you could abstract the methods used in the software renderer into a GPU shader... you might get better performance. You'd need to send the encoded "video" data to the shader as a texture.

kineticfocus

A faster method than both CGDataProvider and glTexSubImage2D is to use CVOpenGLESTextureCache. CVOpenGLESTextureCache allows you to directly modify an OpenGL texture in graphics memory without re-uploading it.
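
In outline it works something like this (a sketch rather than the exact code from the project below; error checking is omitted, and eaglContext, width, height, and myPixels are placeholders for your own context and backbuffer):

// one-time setup: an IOSurface-backed pixel buffer plus a texture cache
// tied to your EAGLContext
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA,
                    (CFDictionaryRef)attrs, &pixelBuffer);

CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext,
                             NULL, &textureCache);

CVOpenGLESTextureRef texture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
    textureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
    width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// per frame: write straight into the memory backing the texture --
// no glTexSubImage2D upload required
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
memcpy(CVPixelBufferGetBaseAddress(pixelBuffer), myPixels,
       CVPixelBufferGetBytesPerRow(pixelBuffer) * height);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

glBindTexture(CVOpenGLESTextureGetTarget(texture),
              CVOpenGLESTextureGetName(texture));
// ... then draw a textured quad as usual ...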

I used it for a fast animation view you can see here:

https://github.com/justinmeiners/image-sequence-streaming

It is a little tricky to use, and I came across it after asking my own question about this topic: How to directly update pixels - with CGImage and direct CGDataProvider

Justin Meiners