1

I am trying to convert an image(UIImage*) into black and white(1bpp) bitmap(.bmp) in objective C. First, I get the original pixel data in ARGB 32-bit format(8 bit per component. And then manipulating pixel data to get the black and white version.

I used the same logic for C#, Java, and C++ and it seems to work fine. However, for Objective C( iOS) it did not generate the correct image. What did I do wrong?

Below is my code:

- (unsigned int *) convertTo1BPP:(UIImage *)image {

CGContextRef    context = NULL;
CGColorSpaceRef colorSpace;
unsigned int*  bitmapData;
int             bitmapByteCount;
int             bitmapBytesPerRow;


// Create the bitmap context
CGImageRef imageRef = [image CGImage];
// Get image width, height. We'll use the entire image.
unsigned int width = CGImageGetWidth(imageRef);
unsigned int height = CGImageGetHeight(imageRef);
unsigned int bitmapStride =((width +31)/32)*4;

// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow   = (width* 4);
bitmapByteCount     = (bitmapBytesPerRow * height);


// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB();
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc( bitmapByteCount );

// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
// kCGImageAlphaPremultipliedFirst
// AAAAAAAARRRRRRRRRGGGGGGGGBBBBBBBB 32 bits per pixel, 8 bits per ARGB component with premultiplied alpha.
context = CGBitmapContextCreate (bitmapData,
                                 width,
                                 height,
                                 8,      // bits per component
                                 bitmapBytesPerRow,
                                 colorSpace,
                                 kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedFirst);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetShouldAntialias(context, NO);


// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

// Now we can get a pointer to the image data associated with the bitmap
// context.
//unsigned char* data = CGBitmapContextGetData (context);
unsigned int* newBitmapData = malloc(((width+31)& ~31)/8 *height);
if (bitmapData != NULL)
{   
    for (unsigned int y = 0; y < height; y++)
    {
        for (unsigned int x = 0; x < width; x++)
        {
            unsigned char mask = 0x80 >> (x & 0x7);

            //Get Pixel data(format 0xAARRGGBB)
            unsigned char rgbPixel = bitmapData[y*width + x];

            //get red color(0x0000AARR & 0xFF => 0x000000RR)
            int red = (unsigned char) (rgbPixel >> 16)& 0xFF;

            //get green color(0x00AARRGG & 0xFF => 0x000000GG)
            int green = (unsigned char) (rgbPixel >> 8)& 0xFF;

            //get blue color(0xAARRGGBB & 0xFF => 0x000000BB)
            int blue = (unsigned char) (rgbPixel >> 0)& 0xFF;

            //calculate brightness
            int brightness = (.299*red + .587*green + .116*blue);

            if(brightness > 128)
            {
                int index = (height-y-1)*bitmapStride + (x/8);
                newBitmapData[index] |= mask;
            }
        }
    }
}

free(bitmapData);
return newBitmapData;

}

Quang Phan
  • 81
  • 7
  • If you don't get 'the correct image,' then what do you get? And what exactly is the correct image? What does 'black and white' mean? Gray scale? Each pixel turning into either black or white? I don't know what it means. Without defining black and white, you are asking "What am I doing wrong"? – El Tomato Jan 07 '14 at 23:27
  • You should be able to use the same logic in Objective-C as in C++. Any difference would be where you interface with platform-specific methods/structures (and, of course, any differences in data structure). You need to describe (*much*) better what didn't work. – Hot Licks Jan 07 '14 at 23:28
  • the function should return the pixel data converted into monochrome(1 bit per pixel). This means that the pixel can only be black(0x00) or white(0xFF). The image im getting back is just a garbage image and not what I passed in. Im suspecting that the original image data buffer in objective c is represented differently? – Quang Phan Jan 07 '14 at 23:47
  • Have you looked at the documentation? I've done some image bashing, and I found that Apple's documentation on the image interfaces was pretty good. – Hot Licks Jan 08 '14 at 00:08
  • (It is important to know that rows are padded out to (IIRC) 16-byte boundaries.) – Hot Licks Jan 08 '14 at 00:09
  • I would just use Core Image. I can't remember exactly, but I think I've used two or three filters including Exposure to turn every pixel into black or white in one project. – El Tomato Jan 08 '14 at 02:06

0 Answers0