
You know Apple's sample code with the CameraRipple effect? Well, I'm trying to record the camera output to a file after OpenGL has applied the water effect.

I've done it with glReadPixels: I read all the pixels into a void * buffer, create a CVPixelBufferRef, and append it to the AVAssetWriterInputPixelBufferAdaptor, but it's too slow because glReadPixels takes a lot of time.
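Roughly, that slow path looked like this per frame (a simplified sketch, not my exact code; pixelAdapter, currentTime and frameLength are the same objects used in the recording code below):

int w = (int)_screenWidth, h = (int)_screenHeight;
GLubyte *rawPixels = (GLubyte *)malloc(w * h * 4);
// BGRA readback is available on iOS via the EXT_read_format_bgra extension;
// rows come back bottom-up, the vertical flip is omitted in this sketch.
glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, rawPixels);

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, w, h, kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *dest = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t destBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
for (int row = 0; row < h; row++) {
    // copy row by row, because the pixel buffer's rows may be padded
    memcpy(dest + row * destBytesPerRow, rawPixels + row * w * 4, w * 4);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

if ([pixelAdapter appendPixelBuffer:pixelBuffer withPresentationTime:currentTime]) {
    currentTime = CMTimeAdd(currentTime, frameLength);
}
CVPixelBufferRelease(pixelBuffer);
free(rawPixels);

I found out that using an FBO and a texture cache you can do the same thing, but faster. Here is my code in the drawInRect: method that Apple uses: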

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err) 
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d");
}


CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs2;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);

CFDictionarySetValue(attrs2,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferRef pixiel_bufer4e = NULL;

CVPixelBufferCreate(kCFAllocatorDefault, 
                    (int)_screenWidth, 
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                              coreVideoTextureCashe, pixiel_bufer4e,
                                              NULL, // texture attributes
                                              GL_TEXTURE_2D,
                                              GL_RGBA, // opengl format
                                              (int)_screenWidth, 
                                              (int)_screenHeight,
                                              GL_BGRA, // native iOS format
                                              GL_UNSIGNED_BYTE,
                                              0,
                                              &renderTexture);
CFRelease(attrs2);
CFRelease(empty);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);

if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
    float result = currentTime.value;
    NSLog(@"current time: %f", result);
    currentTime = CMTimeAdd(currentTime, frameLength);
}

CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
CVPixelBufferRelease(pixiel_bufer4e);
CFRelease(renderTexture);
CFRelease(coreVideoTextureCashe);

It records a video and it's pretty quick, yet the video is just black. I think the texture cache ref is not the right one, or I'm filling it wrong.
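One check I haven't added in the code above, right after the glFramebufferTexture2D call, would be to make sure the texture and the framebuffer are actually usable (just a sketch):

if (renderTexture == NULL) {
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage gave back a NULL texture (check its CVReturn as well)");
}
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Framebuffer incomplete after attaching the CV texture: 0x%x", status);
}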

As an update, here is another approach I've tried. I must be missing something. In viewDidLoad, after I set up the OpenGL context, I do this:

    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);

    if (err)
    {
        NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
    }

    // creates the pixel buffer

    pixel_buffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer (NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, coreVideoTextureCashe, pixel_buffer,
                                                  NULL, // texture attributes
                                                  GL_TEXTURE_2D,
                                                  GL_RGBA, //  opengl format
                                                   (int)screenWidth,
                                                  (int)screenHeight,
                                                  GL_BGRA, // native iOS format
                                                  GL_UNSIGNED_BYTE,
                                                  0,
                                                  &renderTexture);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

Then in drawInRect: I do this:

if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);

    if ([pixelAdapter appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
}

Yet it crashes with EXC_BAD_ACCESS on the renderTexture, which is not nil but 0x000000001.

UPDATE

With the code below I actually managed to get a video file, but there are some green and red flashes. I use the BGRA pixel format type.

Here is where I create the texture cache:

CVReturn err2 = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err2) 
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
    return;
}

And then in drawInRect I call this:

if (isRecording && writerInput.readyForMoreMediaData) {
    [self cleanUpTextures];

    CFDictionaryRef empty; // empty value for attr value.
    CFMutableDictionaryRef attrs2;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
    attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                   1,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);

    CFDictionarySetValue(attrs2,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
    CVPixelBufferRef pixiel_bufer4e = NULL;

    CVPixelBufferCreate(kCFAllocatorDefault, 
                    (int)_screenWidth, 
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);
    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                              coreVideoTextureCashe, pixiel_bufer4e,
                                              NULL, // texture attributes
                                              GL_TEXTURE_2D,
                                              GL_RGBA, // opengl format
                                              (int)_screenWidth, 
                                              (int)_screenHeight,
                                              GL_BGRA, // native iOS format
                                              GL_UNSIGNED_BYTE,
                                              0,
                                              &renderTexture);
    CFRelease(attrs2);
    CFRelease(empty);
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

    CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);

    if([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
        float result = currentTime.value;
        NSLog(@"current time: %f", result);
        currentTime = CMTimeAdd(currentTime, frameLength);
    }

    CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
    CVPixelBufferRelease(pixiel_bufer4e);
    CFRelease(renderTexture);
  //  CFRelease(coreVideoTextureCashe);
}

I know I can optimize this a lot by not doing all of these things here, yet I just wanted to make it work first. In cleanUpTextures I flush the texture cache with:

 CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);

Something might be wrong with the RGBA settings, but it still seems like the texture cache I'm getting isn't quite right.

user1562826
1 Answer


For recording video, this isn't the approach I'd use. You're creating a new pixel buffer for each rendered frame, which will be slow, and you're never releasing it, so it's no surprise you're getting memory warnings.

Instead, follow what I describe in this answer. I create a pixel buffer for the cached texture once, assign that texture to the FBO I'm rendering to, then append that pixel buffer using the AVAssetWriter's pixel buffer input on every frame. It's far faster to use the single pixel buffer than to recreate one every frame. You also want to leave the pixel buffer associated with your FBO's texture target, rather than associating it on every frame.
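In outline, the flow is something like this (a condensed, illustrative sketch rather than the exact code from the linked answer or from GPUImage; context, assetWriterPixelBufferInput, movieFramebuffer, videoSize, and frameTime are placeholder names):

// Illustrative ivars:
CVOpenGLESTextureCacheRef textureCache = NULL;
CVPixelBufferRef renderTarget = NULL;
CVOpenGLESTextureRef renderTexture = NULL;

// --- One-time setup when recording starts, not per frame ---
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                             (__bridge void *)context, NULL, &textureCache);

// One pixel buffer from the adaptor's pool, wrapped in a texture once.
CVPixelBufferPoolCreatePixelBuffer(NULL,
                                   [assetWriterPixelBufferInput pixelBufferPool],
                                   &renderTarget);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                             renderTarget, NULL,
                                             GL_TEXTURE_2D, GL_RGBA,
                                             (int)videoSize.width, (int)videoSize.height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                             &renderTexture);

// Attach that texture to the FBO the effect renders into -- also only once.
glBindFramebuffer(GL_FRAMEBUFFER, movieFramebuffer);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
              CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// --- Per frame, after rendering into that FBO ---
glFinish(); // make sure the GPU has finished writing before the encoder reads the bytes
CVPixelBufferLockBaseAddress(renderTarget, 0);
[assetWriterPixelBufferInput appendPixelBuffer:renderTarget
                          withPresentationTime:frameTime];
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
// renderTarget is reused on every frame; its bytes are the FBO contents, nothing is copied.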

I encapsulate this recording code within the GPUImageMovieWriter in my open source GPUImage framework, if you want to see how this works in practice. As I indicate in the above-linked answer, doing the recording in this fashion leads to extremely fast encodes.

Brad Larson
  • ok, here's what I do in drawInRect and it's fast, but it records a black video. I think the texture cache ref it's getting is empty or not the right one, I don't know. I've updated the question – user1562826 Jul 31 '12 at 14:11
  • @user1562826 - In the future, feel free to just update your original question with the new information. I've done that here for you. You aren't trying to access this pixel buffer and its bound texture on a different thread than your rendering, are you? Simultaneous access to an OpenGL ES context from multiple threads can lead to crashes. – Brad Larson Jul 31 '12 at 16:23
  • No, I use only one thread. I managed to get the video file, yet I still think the texture cache is not properly extracted from the OpenGL context, because in the video there is a line running between two corners of the screen, and it's kind of red above the line and kind of green below it, and there are these strange flashes. I've updated my question. Thanks for the help! – user1562826 Aug 01 '12 at 12:46
  • @BradLarson "I create a pixel buffer for the cached texture once" -- how does this work? Why does Apple create and destroy it all every single frame in their sample code? It seems to me horribly inefficient, but with zero docs from Apple, I don't see any alternative. I want to understand the simple case here, but I keep hitting your answers (it's impressive what you've worked out!) that go in circles x-referencing each other, and saying "this is a replacement for glReadPixels", but not explaining the other uses. – Adam Jan 05 '14 at 23:13
  • @Adam - I'm not sure what Apple sample code you're referring to, but they didn't have any examples that did this at the time that I wrote the above-linked answer (and the code there). For the texture caches, there's overhead in setting up the pixel buffer, which is why you do it once, but after that point the internal bytes are directly mapped to your texture. You then want to reuse the pixel buffer for maximum performance when recording, as the texture bytes within it will be updated as the texture contents are. – Brad Larson Jan 06 '14 at 02:57