
I've got a fairly visually complex app with a base UIViewController and several UIViews (subclassed and extended by me) on it. Periodically, I throw up UIAlertViews and UIPopoverControllers.

I'm working toward a video-recording solution so that as users work through the app, it records what's going on for a later debrief.

I've got a partially working solution, but it's very slow (it can't capture more than about 1 frame per second), has some kinks (images currently come out rotated and skewed, but I think I can fix that), and isn't my idea of an ideal solution.

I hopped off that line of thinking and went toward implementing a solution that uses UIGraphicsGetImageFromCurrentImageContext(), but that keeps giving me nil images, even when called from within drawRect:.

It occurs to me, though, that I don't want to be continuously calling drawRect: just to get a screenshot! I don't want to initiate any extra drawing at all, just capture what's already on screen.

I'm happy to post code that I'm using, but it's not really working yet. Does anyone know of a good solution to do what I'm seeking?

The one solution I did find doesn't fully work for me since it doesn't ever seem to capture UIAlertViews and other overlaid views.

Any help?

Thanks!


mbm29414
  • You can only call `UIGraphicsGetImageFromCurrentImageContext()` when you have previously called `UIGraphicsBeginImageContext()` prior to any drawing. Note that this will also draw into the new context, instead of to the screen. – Sam May 07 '12 at 13:42
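For illustration, here is a minimal sketch of the layer-snapshot approach that comment describes (this is not code from the question; it assumes iOS 4+ and renders every UIWindow, which is how UIAlertViews, hosted in their own window, can be picked up):

#import <QuartzCore/QuartzCore.h>

// Renders every application window into one explicitly created image context and
// returns the result. Iterating over all windows is what lets UIAlertViews (which
// live in their own UIWindow) show up in the capture.
- (UIImage *)snapshotScreen
{
    CGSize screenSize = [[UIScreen mainScreen] bounds].size;
    UIGraphicsBeginImageContextWithOptions(screenSize, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, window.frame.origin.x, window.frame.origin.y);
        [window.layer renderInContext:context];
        CGContextRestoreGState(context);
    }

    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}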

4 Answers


I was unable to do full-size real-time video encoding. However, as an alternative, consider this.

Instead of recording frames, record actions (with their timestamps) as they occur. Then, when you want to play back, just replay the actions. You already have the code, because you execute it in "real life."

All you do is replay those same actions, relative to one another in time.
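As a rough sketch of that idea (ActionRecorder and the string action names are hypothetical placeholders, not from any real project), something like this records timestamped events and replays them with the same relative timing:

#import <Foundation/Foundation.h>

// Hypothetical event recorder: stores (offset, action name) pairs as they happen,
// then replays them with the same relative timing.
@interface ActionRecorder : NSObject
@property (nonatomic, strong) NSMutableArray *events;
@property (nonatomic, assign) CFAbsoluteTime startTime;
@end

@implementation ActionRecorder

- (void)startRecording
{
    self.events = [NSMutableArray array];
    self.startTime = CFAbsoluteTimeGetCurrent();
}

- (void)recordAction:(NSString *)actionName
{
    CFTimeInterval offset = CFAbsoluteTimeGetCurrent() - self.startTime;
    [self.events addObject:[NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithDouble:offset], @"offset",
                            actionName, @"action", nil]];
}

// Replays each recorded action at its original offset from the start of playback.
- (void)replayWithHandler:(void (^)(NSString *actionName))handler
{
    for (NSDictionary *event in self.events) {
        NSTimeInterval offset = [[event objectForKey:@"offset"] doubleValue];
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(offset * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{
            handler([event objectForKey:@"action"]);
        });
    }
}

@end

Seeking within a playback can then be done by dispatching only the events whose offsets fall after the requested start time.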

EDIT

If you want to try the recording, here's what I did (note: I abandoned it... it was an experiment in progress, so just take it as an example of how I approached it... nothing is production-ready). I was able to record live audio/video at 640x360, but that resolution was too low for me. It looked fine on the iPad, but terrible when I moved the video to my Mac and watched it there.

I had problems with higher resolutions. I adapted the bulk of this code from the RosyWriter example project. Here are the main routines for setting up the asset writer, starting the recording, and adding a UIImage to the video stream.

Good luck.

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>

// assetWriter, assetWriterVideoIn, assetWriterPixelBufferAdaptor, movieWritingQueue,
// state (and the STATE_* constants), movieURL, firstFrameWallClockTime,
// firstBufferTimeStamp and TIME_SCALE are ivars/constants defined elsewhere in this
// RosyWriter-derived recorder class.
CGSize const VIDEO_SIZE = { 640, 360 };

- (void) startRecording
{
    dispatch_async(movieWritingQueue, ^{
        NSLog(@"startRecording called in state 0x%04x", state);
        if (state != STATE_IDLE) return;
        state = STATE_STARTING_RECORDING;
        NSLog(@"startRecording changed state to 0x%04x", state);

        NSError *error = nil;
        //assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeQuickTimeMovie error:&error];
        assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeMPEG4 error:&error];
        // Check the returned writer rather than the NSError; the error is only
        // meaningful when initialization actually fails.
        if (!assetWriter) {
            [self showError:error];
            return;
        }
        [self removeFile:movieURL];
        [self resumeCaptureSession];
        [self.delegate recordingWillStart];
    }); 
}


// TODO: this is where we write an image into the movie stream...
- (void) writeImage:(UIImage*)inImage
{
    static CFTimeInterval const minInterval = 1.0 / 10.0;

    static CFAbsoluteTime lastFrameWrittenWallClockTime;
    CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
    CFTimeInterval timeBetweenFrames = thisFrameWallClockTime - lastFrameWrittenWallClockTime;
    if (timeBetweenFrames < minInterval) return;

    // Not really accurate, but we just want to limit the rate we try to write frames...
    lastFrameWrittenWallClockTime = thisFrameWallClockTime;

    dispatch_async(movieWritingQueue, ^{
        if ( !assetWriter ) return;

        if ((state & STATE_STARTING_RECORDING) && !(state & STATE_MASK_VIDEO_READY)) {
            if ([self setupAssetWriterImageInput:inImage]) {
                [self videoIsReady];
            }
        }
        if (state != STATE_RECORDING) return;
        if (assetWriter.status != AVAssetWriterStatusWriting) return;

        if (assetWriterVideoIn.readyForMoreMediaData) {
            // Copy the CGImage only when we're actually going to write a frame,
            // so nothing leaks when the input isn't ready.
            CGImageRef cgImage = CGImageCreateCopy([inImage CGImage]);
            CVPixelBufferRef pixelBuffer = NULL;

            // Resize the original image...
            if (!CGSizeEqualToSize(inImage.size, VIDEO_SIZE)) {
                // Build a context that's the same dimensions as the new size
                CGRect newRect = CGRectIntegral(CGRectMake(0, 0, VIDEO_SIZE.width, VIDEO_SIZE.height));
                CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                            newRect.size.width,
                                                            newRect.size.height,
                                                            CGImageGetBitsPerComponent(cgImage),
                                                            0,
                                                            CGImageGetColorSpace(cgImage),
                                                            CGImageGetBitmapInfo(cgImage));

                // Rotate and/or flip the image if required by its orientation
                //CGContextConcatCTM(bitmap, transform);

                // Set the quality level to use when rescaling
                CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);

                // Draw into the context; this scales the image
                //CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
                CGContextDrawImage(bitmap, newRect, cgImage);

                // Get the resized image from the context and a UIImage
                CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
                CGContextRelease(bitmap);
                CGImageRelease(cgImage);
                cgImage = newImageRef;
            }

            CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

            CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, self.assetWriterPixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
            if (status != kCVReturnSuccess) {
                // Could not get a buffer from the pool; bail out before touching it.
                NSLog(@"Error creating pixel buffer:  status=%d", (int)status);
                CFRelease(image);
                CGImageRelease(cgImage);
                return;
            }

            // Copy the bitmap data into the pixel buffer. This assumes the CGImage's
            // bytes-per-row matches the buffer's row stride; a more careful version
            // would copy row by row.
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
            CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);

            // Timestamp the frame relative to the first frame written.
            CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
            CMTime presentationTime = CMTimeAdd(firstBufferTimeStamp, CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE));
            BOOL success = [self.assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
            if (!success)
                NSLog(@"Warning:  Unable to write buffer to video");

            //clean up
            CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
            CVPixelBufferRelease( pixelBuffer );
            CFRelease(image);
            CGImageRelease(cgImage);
        } else {
            NSLog(@"Not ready for video data");
        }
    });
}


-(BOOL) setupAssetWriterImageInput:(UIImage*)image
{
    NSDictionary* videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                           [NSNumber numberWithDouble:1024.0*1024.0], AVVideoAverageBitRateKey,
                                           nil ];

    NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   //[NSNumber numberWithInt:image.size.width], AVVideoWidthKey,
                                   //[NSNumber numberWithInt:image.size.height], AVVideoHeightKey,
                                   [NSNumber numberWithInt:VIDEO_SIZE.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:VIDEO_SIZE.height], AVVideoHeightKey,

                                   videoCompressionProps, AVVideoCompressionPropertiesKey,
                                   nil];
    NSLog(@"videoSettings: %@", videoSettings);

    assetWriterVideoIn = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
    NSParameterAssert(assetWriterVideoIn);
    assetWriterVideoIn.expectsMediaDataInRealTime = YES;
    NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];

    self.assetWriterPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoIn sourcePixelBufferAttributes:bufferAttributes];

    //add input
    if ([assetWriter canAddInput:assetWriterVideoIn]) {
        [assetWriter addInput:assetWriterVideoIn];
    }
    else {
        NSLog(@"Couldn't add asset writer video input.");
        return NO;
    }

    return YES;
}
Jody Hagins
  • Seems like a lot of work to reconstruct that for my purposes. I'm also looking at playing it side-by-side with video from the camera, time-synced against the actions. Also, I'm looking at doing an export of both of those, so at some point I have to capture screenshots. Any suggestions? – mbm29414 May 03 '12 at 16:06
  • The iPad just does not have enough power to encode video real-time, unless you drop the frame size down considerably. You could save everything and do it later, but the user will have to sit there while it does the processing. – Jody Hagins May 03 '12 at 16:28
  • I'm actually happy to reduce the frame size, but I'm unclear how to do that well without it actually **ADDING** processing time (for it to do the size reduction). If I'm missing something, ***PLEASE*** update your post with a workable solution. I would really like to get something :-). – mbm29414 May 03 '12 at 16:45
  • You mentioned that I might be able to do what I want if I "drop the frame size down considerably". That's a viable option because my text on-screen is fairly large, and we don't need super hi-res images. Do you have any suggestions of how I might accomplish this? It seems like maybe something like a `renderInContext` with a scale factor could be helpful, but this isn't my area of expertise. – mbm29414 May 06 '12 at 12:48
  • You can control that with the settings on the video encoder. Look at the various AVAssetExport* options (specifically consider AVAssetExportPresetLowQuality or AVAssetExportPreset640x480); a rough export sketch follows these comments. Also, consider the rate at which you present frames. You should be able to comfortably keep up with feeding 15 FPS at 640x480. – Jody Hagins May 06 '12 at 18:08
  • That sounds great! For 50 rep, can you write a code block that sets it up? – mbm29414 May 06 '12 at 18:10
  • Jody, that new code looks promising! The bounty is running out, so I went ahead and awarded it to you. Thanks so much! I hope you'll keep checking this question in case I have more issues that pop up, but again... **THANK YOU SO MUCH!!!** – mbm29414 May 07 '12 at 14:29
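For the preset-based export mentioned in the comments above, a rough re-export using one of the built-in presets might look like this (sourceURL and outputURL are placeholders; assumes ARC):

#import <AVFoundation/AVFoundation.h>

// Re-exports an already recorded movie at a lower resolution using a built-in preset.
- (void)exportMovieAtURL:(NSURL *)sourceURL toURL:(NSURL *)outputURL
{
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:sourceURL options:nil];
    AVAssetExportSession *exportSession =
        [[AVAssetExportSession alloc] initWithAsset:asset
                                         presetName:AVAssetExportPreset640x480];
    exportSession.outputURL = outputURL;
    exportSession.outputFileType = AVFileTypeMPEG4;

    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (exportSession.status == AVAssetExportSessionStatusCompleted) {
            NSLog(@"Export finished");
        } else {
            NSLog(@"Export failed: %@", exportSession.error);
        }
    }];
}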

Maybe you can get better performance by using UIGetScreenImage(), coupled with an AVAssetWriter to save, for example, an H.264 MP4 file.

Add an AVAssetWriterInput to your asset writer and call:

- (BOOL)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer

at a regular interval (maybe you can schedule a timer on your main thread), and create a sample buffer from the UIImage gathered by calling UIGetScreenImage().

The AVAssetWriterInputPixelBufferAdaptor might also be useful, along with this method:

- (BOOL)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime

You can look at this question to convert a CGImageRef to a CVPixelBufferRef.
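One common shape for that conversion looks roughly like this (not taken from the linked question; it assumes ARC and a 32-bit ARGB buffer):

#import <CoreVideo/CoreVideo.h>

// Draws a CGImage into a newly created CVPixelBuffer. The caller owns the returned
// buffer and must CVPixelBufferRelease() it when done.
static CVPixelBufferRef CreatePixelBufferFromCGImage(CGImageRef image, CGSize size)
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], (id)kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], (id)kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          (size_t)size.width, (size_t)size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pixelBuffer);
    if (status != kCVReturnSuccess) return NULL;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);

    // Draw the image into the pixel buffer's backing memory.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress,
                                                 (size_t)size.width, (size_t)size.height,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer;
}

The resulting buffer can then be handed to appendPixelBuffer:withPresentationTime: and released afterwards.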

A note on UIGetScreenImage(): it is a private API, but Apple has allowed its use: https://devforums.apple.com/message/149553#149553

ıɾuǝʞ
  • Basic testing with this method (on a new iPad) seems to indicate that it takes about 0.6 seconds ***per frame*** to use `UIGetScreenImage()`. Plus, the performance of my app suffers terribly using this. Maybe it's just not possible!?! – mbm29414 May 02 '12 at 02:20
  • I was thinking it might have better performance, since RedLaser (a datamatrix scanning lib) was using it: http://stackoverflow.com/a/1726132/536308 – ıɾuǝʞ May 02 '12 at 12:34

I think you should take a serious step back and consider what final effect you are actually trying to achieve. I do not think continuing down this path will lead you to very much success. If your end goal is simply analytics about the actions your users are taking, there are quite a few iOS-flavored analytics tools at your disposal, and you should be capturing information that is more useful than simple screen grabs. Unfortunately, since your question is focused on the frame-rate issues you have discovered, I am unable to determine your final goal.

I recommend against both using private APIs and attempting to record the screen data in real time. It is simply too much information to process. This is coming from a game development veteran who has pushed iOS devices to their limits. That said, if you do come up with a workable solution (likely something very low-level involving CoreGraphics and heavily compressed images), I would love to hear about it.

Sam
  • The purpose of the screen recording is to replay what's happening on the screen real-time (which will be displayed next to a video captured from the camera, time-synced) so that we can train users to a particular task with higher fidelity. My purpose is much less analytics-related than I think you read into my question. At this point, I'm doing some serious thinking about @JodyHagins' answer to store the user interactions and re-make my screen and call the actions again as the video progresses. The problem with that is that I intend to allow users to jump around in the video. – mbm29414 May 06 '12 at 12:45
  • Second comment due to limited comment space: Based on current results of training in this particular area (high-stakes medical intervention), we really need the ability to have all kinds of data at our disposal during the debrief. I'm open to thinking outside of the box, but I'm pretty sure I need something ***at least like*** what I've outlined. Thanks! – mbm29414 May 06 '12 at 12:46
  • @mbm29414 Thanks for the additional information you have provided! Based on your comments, I have another suggestion. If the users will be using this app in a monitored training scenario, perhaps the best approach is to purchase a TV-out dock cable and record the screen output that way. Trying to think outside the box here; let me know if this isn't feasible. – Sam May 06 '12 at 19:14
  • I appreciate the "out of the box" thinking. While it **will** be used in such a training scenario, it will also be used in real-life scenarios, and the video of the screen will then be useful not just for debrief but also for audit/quality assurance/etc... I really think I need to figure out some sort of rapid-execution screen capturing capability. I can't believe this is that difficult to do! ;-) – mbm29414 May 06 '12 at 19:24
  • I timestamp the start of recording, and every event is time-stamped. Thus, it is pretty easy to seek to any time, relative to the start of the recording. – Jody Hagins May 07 '12 at 13:14

If you're targeting the iPhone 4S or iPad 2 (or newer), AirPlay mirroring may do what you need. There is a Mac app called Reflection that receives AirPlay content and can even record a .mov directly. This may not be feasible to support with a diverse set of users or locations, but it has been helpful to me.

MattP