
I am trying to convert a YUV image to a CIImage and ultimately a UIImage. I am fairly novice at this and trying to figure out an easy way to do it. From what I have learnt, from iOS 6 onwards YUV can be used directly to create a CIImage, but as I try to create it the CIImage only holds a nil value. My code is like this:

NSLog(@"Started DrawVideoFrame\n");

CVPixelBufferRef pixelBuffer = NULL;

CVReturn ret = CVPixelBufferCreateWithBytes(
                                            kCFAllocatorDefault, iWidth, iHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                            lpData, bytesPerRow, 0, 0, 0, &pixelBuffer
                                            );

if(ret != kCVReturnSuccess)
{
    NSLog(@"CVPixelBufferCreateWithBytes failed");
    CVPixelBufferRelease(pixelBuffer);
}

NSDictionary *opt =  @{ (id)kCVPixelBufferPixelFormatTypeKey :
                      @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

CIImage *cimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:opt];
NSLog(@"CURRENT CIImage -> %p\n", cimage);

UIImage *image = [UIImage imageWithCIImage:cimage scale:1.0 orientation:UIImageOrientationUp];
NSLog(@"CURRENT UIImage -> %p\n", image);

Here lpData is the YUV data, an array of unsigned char.

vImageMatrixMultiply also looks interesting, but I can't find any example of it. Can anyone help me with this?

d1xlord
    [This](https://developer.apple.com/library/ios/documentation/graphicsimaging/Conceptual/CoreImaging/ci_performance/ci_performance.html) may help you. – swiftBoy Sep 04 '14 at 07:34
    Thanks, I have already checked this link out. I am following this and using options like this but CIImage isn't being initialised – d1xlord Sep 04 '14 at 08:16
  • I found this [link](http://stackoverflow.com/questions/13201084/how-to-convert-a-kcvpixelformattype-420ypcbcr8biplanarfullrange-buffer-to-uiimag?rq=1). This may be helpful. I will try it out and post the result. – d1xlord Sep 04 '14 at 09:11
    The previous one doesn't work properly. I am also looking for a way to make this function work: **vImageMatrixMultiply**. Can't find any proper example for it. Can anyone help me on this? – d1xlord Sep 04 '14 at 11:41
  • Hello, Have you figured out the solution??? if yes please help me... I am facing similar problem.. any resource will be helpful.... – RajibTheKing Oct 17 '15 at 06:37

2 Answers


I also faced this problem. I was trying to display YUV (NV12) formatted data on the screen, and this solution works in my project.

//YUV(NV12)-->CIImage--->UIImage Conversion
NSDictionary *pixelAttributes = @{kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;

CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      640,
                                      480,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    NSLog(@"Unable to create cvpixelbuffer %d", result);
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

// Here y_ch0 is the Y-plane of the YUV (NV12) data.
// (This assumes the plane's bytes-per-row equals the width;
//  otherwise copy row by row using CVPixelBufferGetBytesPerRowOfPlane.)
memcpy(yDestPlane, y_ch0, 640 * 480);
unsigned char *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);

// Here y_ch1 is the interleaved UV-plane of the YUV (NV12) data.
memcpy(uvDestPlane, y_ch1, 640 * 480 / 2);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// CIImage conversion
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext createCGImage:coreImage
                                               fromRect:CGRectMake(0, 0, 640, 480)];

// UIImage conversion
UIImage *uiImage = [[UIImage alloc] initWithCGImage:videoImage
                                              scale:1.0
                                        orientation:UIImageOrientationRight];

CVPixelBufferRelease(pixelBuffer);
CGImageRelease(videoImage);

Here I am showing the data structure of YUV (NV12) data and how we can get the Y-plane (y_ch0) and UV-plane (y_ch1) that are used to create the CVPixelBufferRef. Looking at the NV12 memory layout (diagram omitted), we can get the following information about YUV (NV12):

  • Total frame size = width * height * 3/2
  • Y-plane size = frame size * 2/3 (= width * height)
  • UV-plane size = frame size * 1/3 (= width * height / 2)
  • Data stored in the Y-plane: {Y1, Y2, Y3, Y4, Y5, ...}
  • Data stored in the UV-plane: {U1, V1, U2, V2, U3, V3, ...}

I hope it will be helpful to all. :) Have fun with iOS development! :D

RajibTheKing

If you have a video frame object that looks like this:

int width, 
int height, 
unsigned long long time_stamp,
unsigned char *yData, 
unsigned char *uData, 
unsigned char *vData,
int yStride,
int uStride,
int vStride

You can use the following to fill up a pixelBuffer:

NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey:@{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                        width,
                                        height,
                                        kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,   //  NV12
                                        (__bridge CFDictionaryRef)(pixelAttributes),
                                        &pixelBuffer);
if (result != kCVReturnSuccess) {
    NSLog(@"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// Note: indexing the destination with k++ assumes the destination plane's
// bytes-per-row equals the width; for padded buffers, advance the destination
// by CVPixelBufferGetBytesPerRowOfPlane per row instead.
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        yDestPlane[k++] = yData[j + i * yStride];
    }
}
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int i = 0, k = 0; i < height / 2; i++) {
    for (int j = 0; j < width / 2; j++) {
        uvDestPlane[k++] = uData[j + i * uStride];
        uvDestPlane[k++] = vData[j + i * vStride];
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

Now you can convert it to CIImage:

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *tempContext = [CIContext contextWithOptions:nil];
CGImageRef coreImageRef = [tempContext createCGImage:coreImage
                                        fromRect:CGRectMake(0, 0, width, height)];

And to a UIImage if you need one (the image orientation can vary depending on your input):

UIImage *myUIImage = [[UIImage alloc] initWithCGImage:coreImageRef
                                    scale:1.0
                                    orientation:UIImageOrientationUp];

Don't forget to release the variables:

CVPixelBufferRelease(pixelBuffer);
CGImageRelease(coreImageRef);
Jovan