
For the past 4 to 5 hours I've been wrestling with this very bizarre issue. I have an array of bytes containing pixel values from which I'd like to make an image. The array represents 32-bit-per-component values. There is no alpha channel, so the image is 96 bits/pixel.

I have specified all of this to the CGImageCreate function as follows:

  CGImageRef img = CGImageCreate(width, height, 32, 96, bytesPerRow, space, kCGImageAlphaNone, provider, NULL, NO, kCGRenderingIntentDefault);

bytesPerRow is 3*width*4: there are 3 components per pixel, and each component takes 4 bytes (32 bits), so the total bytes per row is 3*4*width. The data provider is defined as follows:

     CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,bitmapData,3*4*width*height,NULL);

This is where things get bizarre. In my array, I am explicitly setting the values to 0x000000FF (for all 3 channels), and yet the image comes out completely white. If I set the value to 0xFFFFFF00, the image comes out black. This tells me that the program is, for some reason, not reading all 4 bytes of each component and is instead reading only the least significant byte. I have tried all sorts of combinations, even including an alpha channel, but it has made no difference.

The program is blind to this: 0xAAAAAA00. It simply reads it as 0. When I explicitly specify that there are 32 bits per component, shouldn't the function take this into account and actually read 4 bytes from the array?
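That hypothesis can be checked outside of CoreGraphics entirely. A minimal C sketch (my own helper names, assuming a little-endian host such as x86 or Apple Silicon) reproduces exactly the observed behaviour: a plain 4-byte store puts the least significant byte first in memory, so a consumer that expects big-endian data keys off that byte.

```c
#include <stdint.h>
#include <string.h>

/* Interpret 4 raw buffer bytes as a big-endian 32-bit value -- the
   way a consumer that expects the most significant byte first would
   read them. */
static uint32_t read_big_endian32(const unsigned char *p)
{
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16
         | (uint32_t)p[2] << 8  | (uint32_t)p[3];
}

/* Store a host-order component into a buffer the way the loop in the
   question does (a plain 4-byte store), then read it back big-endian.
   On a little-endian host, storing 0x000000FF puts 0xFF in the FIRST
   byte, so the big-endian reader sees 0xFF000000. */
static uint32_t store_then_read_big_endian(uint32_t component)
{
    unsigned char bytes[4];
    memcpy(bytes, &component, sizeof component);
    return read_big_endian32(bytes);
}
```

On a little-endian machine, `store_then_read_big_endian(0x000000FF)` yields `0xFF000000` (near full scale, hence white), while `store_then_read_big_endian(0xFFFFFF00)` yields `0x00FFFFFF` (most significant byte 0, hence black) -- matching the symptoms above.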

The byte array is allocated as bitmapData = (char*)malloc(bytesPerRow*height); and I am assigning values to it as follows:

for(i=0;i<width*height;i++)
{
    *((unsigned int *)(bitmapData + 12*i + 0)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 4)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 8)) = 0xFFFFFF00;
}

Note that I address the array through an unsigned int pointer so that each assignment writes 4 bytes of memory. i is multiplied by 12 because there are 12 bytes per pixel; the offsets of 4 and 8 address the green and blue channels. I have inspected the array's memory in the debugger and it seems perfectly OK: the loop is writing all 4 bytes. Any pointers on this would be MOST helpful. My ultimate goal is to read 32-bit FITS files, for which I already have the program written; I am only testing the code above with this array.

Here is the code in its entirety, in case it matters. This is in the drawRect:(NSRect)dirtyRect method of my custom view:

int width, height, bytesPerRow;
int i;

width = 256;
height = 256;
bytesPerRow = 3*width*4;

char *bitmapData;
bitmapData = (char*)malloc(bytesPerRow*height);
for(i=0;i<width*height;i++)
{
    *((unsigned int *)(bitmapData + 12*i + 0)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 4)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 8)) = 0xFFFFFF00;
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,bitmapData,3*4*width*height,NULL);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

CGImageRef img = CGImageCreate(width, height, 32, 96, bytesPerRow, space, kCGImageAlphaNone, provider, NULL, NO, kCGRenderingIntentDefault);

CGColorSpaceRelease(space);
CGDataProviderRelease(provider);

CGContextRef theContext = [[NSGraphicsContext currentContext] graphicsPort];
CGContextDrawImage(theContext, CGRectMake(0,0,width,height), img);
  • 32 bits/component seems like a rather odd image format; do you really need ~7.9*10^28 different colors?!?!?! – bbum Mar 31 '11 at 03:00
  • The FITS file format does specify 32 bit components per channel, so yes. – saad Mar 31 '11 at 03:05
  • Sure; but... do you have a device that can render that many colors? (Can I have it?) It wouldn't surprise me if CoreGraphics has a cutoff somewhere. Try 16 bit/component and 24 bit/component. Start with something simple that works and expand from there. – bbum Mar 31 '11 at 03:36
  • I do have 16 bit/component working with NSImageBitRep. The reason I was trying out CG was because it seemed to support 32 bits. I am not thinking "more is better", but previously I was trying to display a 32 bit image in 16 bit range by dividing it by 0xFFFF. This worked, but there were always some differences between it and its true 16 bit counterpart. Yet, if I opened the same two images in PS, there was absolutely no difference. What could be the reason for this? Should I not divide a 32 bit image by 0xFFFF to bring it down to 16 bit range? Is that not the correct approach? – saad Mar 31 '11 at 09:17

1 Answer


I see a few things worth pointing out:

First, the Quartz 2D Programming Guide doesn't list 96-bpp RGB as a supported format. You might try 128-bpp RGB.

Second, you're working on a little-endian system*, which means the least significant byte comes first in memory. Set each component to 0x330000EE and you will see a light grey (EE), not a dark grey (33).
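If that diagnosis is right, one workaround is to swap each component to big-endian byte order before storing it. A minimal sketch (the `swap32` helper name is mine; on Apple platforms `OSSwapHostToBigInt32()` from `<libkern/OSByteOrder.h>`, or `htonl()`, would do the same job):

```c
#include <stdint.h>

/* Reverse the byte order of a 32-bit value. On a little-endian host
   this converts a host-order component to big-endian storage order,
   so the most significant byte really does come first in memory. */
static uint32_t swap32(uint32_t v)
{
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);
}
```

Storing `swap32(0x000000FF)` instead of `0x000000FF` in the fill loop puts the bytes into the buffer MSB-first, so a big-endian consumer reads back the intended value.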

Most importantly, bbum is absolutely right when he points out that your display can't render that range of color**. It's getting squashed down to 8-bpc just for display. If it's correct in memory, then it's correct in memory.


*: More's the pity. R.I.P PPC.

**: Maybe NASA has one that can?

  • Thanks. For what it's worth, I do have 16 bits working with NSImageBitRep. I understand the point about everything being displayed in 8 bit, but then how do I accurately display a 32 bit image in 8 bit or even 16 bits? An approach that I was taking previously was simply to divide the entire image by 0xFFFF. This brought a 32 bit image into 16 bit range and I was able to display it without clipping. However, there were always some differences in brightness if I had previously saved the file as 16 bits. What is the best approach to this - how can I deal with both 16 bit and 32 bit images? – saad Mar 31 '11 at 09:12
  • @Saad: What's the best function to use to reduce color depth from 32-bit to 8 for display? I'm afraid I haven't the foggiest idea. That sounds like someone's signal processing thesis project (which is not my area of expertise, unfortunately). I'd guess that there's a strong dependence on the contents of your image. It doesn't surprise me that Photoshop is able to at least attempt the display (although you are of course still losing information when it's put onscreen). There may be an open-source FITS library floating around the web whose bit-depth reduction code you can peruse. – jscs Mar 31 '11 at 19:36
  • Thanks Josh. I am already using the most advanced FITS library out there (CFITSIO). Do you think posting another question regarding this matter is a good idea? Is that allowed at StackOverflow? – saad Mar 31 '11 at 22:51
  • @Saad: You may still get a more useful answer to this one, but if you mean ask a new question about another specific aspect of the topic (like figuring out the transfer function), then not only is it allowed, it's the correct protocol. You should absolutely post it -- I think it's a good question and I will personally be interested to read the responses. I would only suggest that you make the new question more general, leaving out the CoreGraphics stuff, or you'll just get more people like me and bbum telling you about the limitations of CG. – jscs Apr 01 '11 at 01:33
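On the divide-by-0xFFFF scaling discussed in the comments, the arithmetic alone suggests where the brightness differences could come from: 0xFFFFFFFF / 0xFFFF = 0x10001, which exceeds the 16-bit maximum, so full-scale samples overflow or clip under that mapping. Shifting right by 16 (equivalently, dividing by 0x10000) maps full scale to exactly 0xFFFF. A sketch (the helper name is mine):

```c
#include <stdint.h>

/* Scale a 32-bit sample to the 16-bit range. Dividing by 0xFFFF maps
   0xFFFFFFFF to 0x10001 (> 0xFFFF, i.e. it overflows 16 bits), while
   shifting right by 16 maps it to exactly 0xFFFF. */
static uint16_t scale_32_to_16(uint32_t sample)
{
    return (uint16_t)(sample >> 16);
}
```

This is only the linear part of the story; a perceptually good reduction may still need a transfer function, as the comments note.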