
I am trying to write code that can crop an existing image down to some specified size/region. I am working with DICOM images, and the API I am using allows me to get pixel values directly. I've placed pixel values of the area of interest within the image into an array of floats (dstImage, below).

Where I'm encountering trouble is with the actual construction/creation of the new, cropped image file from this pixel data. The source image is grayscale; however, all of the examples I have found online (like this one) have been for RGB images. I tried to follow the example in that link, adjusting for grayscale and trying numerous different values, but I continue to get errors on the CGBitmapContextCreate line of code and still do not clearly understand what those values are supposed to be.

My intensity values for the source image go above 255, so my impression is that this is not 8-bit grayscale but 16-bit grayscale.

Here is my code:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

CGContextRef context;
context = CGBitmapContextCreate(dstImage, // pixel data from the region of interest
                                dstWidth, // width of the region of interest
                                dstHeight, // height of the region of interest
                                16, // bits per component
                                2 * dstWidth, // bytes per row
                                colorSpace,
                                kCGImageAlphaNoneSkipLast);

CFRelease(colorSpace);

CGImageRef cgImage = CGBitmapContextCreateImage(context);
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                             CFSTR("test.png"),
                                             kCFURLPOSIXPathStyle,
                                             false);
CFStringRef type = kUTTypePNG;
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url,
                                                             type,
                                                             1,
                                                             0);
CGImageDestinationAddImage(dest,
                           cgImage,
                           0);
CFRelease(cgImage);
CFRelease(context);
CGImageDestinationFinalize(dest);
free(dstImage);

The error I keep receiving is:

CGBitmapContextCreate: unsupported parameter combination: 16 integer bits/component; 32 bits/pixel; 1-component color space; kCGImageAlphaNoneSkipLast; 42 bytes/row.

The ultimate goal is to create an image file from the pixel data in dstImage and save it to the hard drive. Help on this would be greatly appreciated, as would insight into how to determine what values I should be using in the CGBitmapContextCreate call.

Thank you

cj.lange

1 Answer


First, you should familiarize yourself with the "Supported Pixel Formats" section of Quartz 2D Programming Guide: Graphics Contexts.

If your image data is in an array of float values, then it's 32 bits per component, not 16, which also means bytes per row is 4 * dstWidth, not 2 * dstWidth. Therefore, you have to use kCGImageAlphaNone | kCGBitmapFloatComponents.
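For reference, here is a minimal sketch of the corrected call, assuming `dstImage` is a `float *` holding one component per pixel and `dstWidth`/`dstHeight` are as in your code:

```c
// 32-bit float grayscale, no alpha: one float per pixel.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(
    dstImage,                  // float pixel data
    dstWidth,
    dstHeight,
    32,                        // bits per component: sizeof(float) * 8
    dstWidth * sizeof(float),  // bytes per row: 4 * dstWidth
    colorSpace,
    kCGImageAlphaNone | kCGBitmapFloatComponents);
CGColorSpaceRelease(colorSpace);
```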

However, I believe that Core Graphics will interpret floating-point components as being between 0.0 and 1.0. If your values are outside of that range, you may need to convert them using something like (value - minimumValue) / (maximumValue - minimumValue). An alternative may be to use CGColorSpaceCreateCalibratedGray(), or to create a CGImage using CGImageCreate() with an appropriate decode parameter and then draw that image into a bitmap context with CGContextDrawImage().

In fact, if you're not drawing into your bitmap context, you should just be creating a CGImage instead, anyway.

Ken Thomases
  • Thanks, Ken. This helped me make a step in the right direction. While I now get an output PNG, it looks rather different from what I'd expect. It appears as though nearly all of the pixels are either black or white; there are only a couple that are some intermediate gray. Second, the PNG appears very pixelated (more so than if I zoom the original image so the region of interest is a similar size). – cj.lange Apr 23 '14 at 19:08
  • I ultimately want to be able to draw/write text programmatically on the result image in the end, so if I understand your last statement, it sounds as though 'CGImage' likely wouldn't allow for this. – cj.lange Apr 23 '14 at 19:16
  • Correct, but you can create a `CGImage` first and then draw it into a bitmap context with `CGContextDrawImage()`. Sorry, I left that part out of my answer by mistake. – Ken Thomases Apr 23 '14 at 21:20
  • OK. I got it to output a PNG using `CGImage` instead of `CGBitmapContextCreateImage()`. BUT, the image looks exactly the same (very pixelated, nearly all pixels are either black or white, and not much resemblance to what I was hoping to create). I tried with interpolation on and off, but that didn't make a difference. Any ideas on this aspect of the problem? – cj.lange Apr 24 '14 at 15:59
  • Did you pass a `decode` array in to `CGImageCreate()` that indicated the range of your component values? – Ken Thomases Apr 24 '14 at 18:30
  • I did not. The example I found just passed `NULL`. I was confused by the documentation because it refers to mapping for RGBA colors, and my source image is grayscale. Could you clarify what (if anything) should be used for the `decode` array in my situation? Thank you – cj.lange Apr 24 '14 at 19:10
  • `CGFloat my_decode[2] = { minimumComponentValue, maximumComponentValue };` Then pass `my_decode` for the `decode` parameter. You have to know what the minimum and maximum values are for your image data. – Ken Thomases Apr 24 '14 at 20:15
  • Got it. I made the change, and it for the most part inverted what I was getting before (it's still not correct), but it's mainly all white pixels with some black pixels mixed in. To confirm, I should be using the normalized values in the range [0, 1] here, correct? – cj.lange Apr 24 '14 at 20:26
  • I will add that when I disable the interpolation on zoom feature in my DICOM viewer, it looks pixelated as well. However, the coloring is still very different between what I am getting as output with my code and what the region looks like zoomed in. – cj.lange Apr 24 '14 at 20:38
  • The decode array is telling Core Graphics about the values in the buffer you're providing. If the buffer contains values in the range [0, 1], then you would pass those values, but in that case it shouldn't be necessary to use a decode array at all. You had said that your values exceeded 255, so clearly they're not in the range [0, 1]. You need to figure out what the range of your values is and pass that in the decode array. – Ken Thomases Apr 24 '14 at 22:13