i have a jpg file. I need to convert it to pixel data and then change color of some pixel. I do it like this:

    NSString *string = [[NSBundle mainBundle] pathForResource:@"pic" ofType:@"jpg"];
    NSData *data = [NSData dataWithContentsOfFile:string];
    const unsigned char *bytesArray = data.bytes;
    NSUInteger bytesLength = data.length;
    //--------pixel to array
    NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:bytesLength];
    for (int i = 0; i < bytesLength; i++) {
        [array addObject:[NSNumber numberWithUnsignedChar:bytesArray[i]]];
    }

Here I try to change the color of pixels 95 through 154.

    NSNumber *number = [NSNumber numberWithInt:200];
    for (int i = 95; i < 155; i++) {
        [array replaceObjectAtIndex:i withObject:number];
    }

But when I convert the array back to an image I get a blurred picture. I don't understand why I can't affect just those specific pixels, and why the change instead affects the picture as a whole.

    You read a JPEG-compressed image. To manipulate pixels, you have to decompress the file to a bitmap first ... – Martin R Apr 06 '14 at 17:32
  • I did it with bmp file and when i change some pixels then i receive an array of (int)0. and when i put it in UIImage data, debug tells me UIImage is nil. ??? – user2032083 Apr 06 '14 at 17:55

1 Answer

The process of accessing pixel-level data is a little more complicated than your question might suggest, because, as Martin pointed out, JPEG is a compressed image format. Apple discusses the approved technique for getting pixel data in Technical Q&A QA1509.

Bottom line, to get the uncompressed pixel data for a UIImage, you would:

  1. Get the CGImage for the UIImage.

  2. Get the data provider for that CGImageRef via CGImageGetDataProvider.

  3. Get the binary data associated with that data provider via CGDataProviderCopyData.

  4. Extract some of the information about the image, so you know how to interpret that buffer.

Thus:

UIImage *image = ...

CGImageRef imageRef = image.CGImage;                                     // get the CGImageRef
NSAssert(imageRef, @"Unable to get CGImageRef");

CGDataProviderRef provider = CGImageGetDataProvider(imageRef);           // get the data provider
NSAssert(provider, @"Unable to get provider");

NSData *data = CFBridgingRelease(CGDataProviderCopyData(provider));      // get copy of the data
NSAssert(data, @"Unable to copy image data");

NSInteger       bitsPerComponent = CGImageGetBitsPerComponent(imageRef); // some other interesting details about image
NSInteger       bitsPerPixel     = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo    bitmapInfo       = CGImageGetBitmapInfo(imageRef);
NSInteger       bytesPerRow      = CGImageGetBytesPerRow(imageRef);
NSInteger       width            = CGImageGetWidth(imageRef);
NSInteger       height           = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace       = CGImageGetColorSpace(imageRef);

Given that you want to manipulate this, you presumably want some mutable pixel buffer. The easiest approach would be to make a mutableCopy of that NSData object and manipulate it there, but in these cases, I tend to fall back to C, creating a void *outputBuffer, into which I copy the original pixels and manipulate using traditional C array techniques.

To create the buffer and copy the original pixels into it (using bytesPerRow rather than width * bitsPerPixel / 8, because rows may be padded):

void *outputBuffer = malloc(bytesPerRow * height);
NSAssert(outputBuffer, @"Unable to allocate buffer");
memcpy(outputBuffer, data.bytes, data.length);

For the precise details on how to manipulate it, you have to look at bitmapInfo (which will tell you whether it's RGBA or ARGB; whether it's floating point or integer) and bitsPerComponent (which will tell you whether it's 8 or 16 bits per component, etc.). For example, a very common JPEG format is 8 bits per component, four components, RGBA (i.e. red, green, blue, and alpha, in that order). But you really need to check those various properties we extracted from the CGImageRef to make sure. See the discussion in the Quartz 2D Programming Guide - Bitmap Images and Image Masks for more information. I personally find "Figure 11-2" to be especially illuminating.

The next logical question is, once you're done manipulating the pixel data, how to create a UIImage from it. In short, you'd reverse the above process: create a data provider, create a CGImageRef, and then create a UIImage:

CGDataProviderRef outputProvider = CGDataProviderCreateWithData(NULL, outputBuffer, bytesPerRow * height, releaseData); // pass the buffer's size; sizeof(outputBuffer) would only be the size of a pointer

CGImageRef outputImageRef = CGImageCreate(width,
                                          height,
                                          bitsPerComponent,
                                          bitsPerPixel,
                                          bytesPerRow,
                                          colorspace,
                                          bitmapInfo,
                                          outputProvider,
                                          NULL,
                                          NO,
                                          kCGRenderingIntentDefault);

UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];

CGImageRelease(outputImageRef);
CGDataProviderRelease(outputProvider);

Where releaseData is a C function that simply calls free on the pixel buffer associated with the data provider:

void releaseData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
Rob
  • Thanks, try to do it))) – user2032083 Apr 06 '14 at 18:08
  • @Rob I have tried as many possible solutions as I can find but can't find the right way to capture images with avcapturesession and then use this cgimage and load them into an imageview collection with correct orientation. If you have any free time, the q is here: https://stackoverflow.com/questions/46913953/avcapture-image-orientation – user2363025 Oct 31 '17 at 10:09