
I have a project that requires the original RGB values from an NSImage. When downscaling a completely red PNG image (RGB: 255, 0, 0) to a size of 200×200, I get slightly different RGB values (RGB: 251, 0, 7). The resizing code and pixel extraction code are below. I have two questions. Is this expected behavior when resizing an NSImage using the code below? And is it possible to retain the original RGB values of the image (the RGB values that existed before the downscaling)?

Resizing Code (credit):

open func resizeImage(image: NSImage, newSize: NSSize) -> NSImage {

    let rep = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(newSize.width), pixelsHigh: Int(newSize.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSCalibratedRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)

    rep?.size = newSize

    NSGraphicsContext.saveGraphicsState()
    let bitmap = NSGraphicsContext(bitmapImageRep: rep!)
    NSGraphicsContext.setCurrent(bitmap)
    image.draw(in: NSMakeRect(0, 0, newSize.width, newSize.height), from: NSMakeRect(0, 0, image.size.width, image.size.height), operation: .sourceOver, fraction: 1.0)
    // Balance the earlier saveGraphicsState(); the original code leaked this state.
    NSGraphicsContext.restoreGraphicsState()

    let newImage = NSImage(size: newSize)
    newImage.addRepresentation(rep!)
    return newImage
}
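Following the suggestion in the comments below, one variant is to create the bitmap rep with the source rep's own `colorSpaceName` instead of hard-coding `NSCalibratedRGBColorSpace`, and to draw with `.copy` so transparent source pixels keep their component values. This is a sketch using the same Swift 3-era AppKit names as the code above, not a tested drop-in replacement:

```swift
import Cocoa

// Sketch: resize while matching the source rep's color space (Swift 3-era API).
func resizeMatchingColorSpace(image: NSImage, newSize: NSSize) -> NSImage? {
    guard let srcRep = image.representations.first as? NSBitmapImageRep,
          let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(newSize.width),
                                     pixelsHigh: Int(newSize.height),
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: srcRep.colorSpaceName, // match the source
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return nil }
    rep.size = newSize

    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.setCurrent(NSGraphicsContext(bitmapImageRep: rep))
    // .copy replaces destination pixels outright, so transparent source
    // pixels keep their r, g, b values instead of compositing to black.
    image.draw(in: NSMakeRect(0, 0, newSize.width, newSize.height),
               from: NSMakeRect(0, 0, image.size.width, image.size.height),
               operation: .copy,
               fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()

    let newImage = NSImage(size: newSize)
    newImage.addRepresentation(rep)
    return newImage
}
```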

The code I used to extract the RGB values from an NSImage is below.

RGB Extraction Code (credit):

extension NSImage {

    func pixelData() -> [Pixel] {
        let bmp = self.representations[0] as! NSBitmapImageRep
        let data: UnsafeMutablePointer<UInt8> = bmp.bitmapData!
        var pixels: [Pixel] = []

        NSLog("%d", bmp.pixelsHigh)
        NSLog("%d", bmp.pixelsWide)
        // Index through bytesPerRow rather than walking the pointer linearly:
        // the rep may pad each row for alignment, so rows aren't always
        // pixelsWide * samplesPerPixel bytes apart.
        for row in 0..<bmp.pixelsHigh {
            for col in 0..<bmp.pixelsWide {
                let offset = row * bmp.bytesPerRow + col * bmp.samplesPerPixel
                let r = data[offset]
                let g = data[offset + 1]
                let b = data[offset + 2]
                let a = data[offset + 3]
                pixels.append(Pixel(r: r, g: g, b: b, a: a))
            }
        }

        return pixels
    }
}
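The row-indexing caveat above comes down to simple offset arithmetic, sketched below. The `816` is a hypothetical padded row width for a 200-pixel-wide RGBA bitmap, not a value from the question:

```swift
// Byte offset of the sample data for the pixel at (row, col),
// accounting for possible row padding in the bitmap.
func pixelOffset(row: Int, col: Int, bytesPerRow: Int, samplesPerPixel: Int) -> Int {
    return row * bytesPerRow + col * samplesPerPixel
}

// Hypothetical 200-pixel-wide RGBA bitmap padded to 816 bytes per row
// (200 * 4 = 800 bytes of pixel data + 16 bytes of padding).
let offset = pixelOffset(row: 1, col: 0, bytesPerRow: 816, samplesPerPixel: 4)
print(offset) // 816: a linear pointer walk would land 16 bytes short of row 1
```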

class Pixel {

    let r: Float
    let g: Float
    let b: Float

    init(r: UInt8, g: UInt8, b: UInt8, a: UInt8) {
        self.r = Float(r)
        self.g = Float(g)
        self.b = Float(b)
    }
}
    You're creating your bitmap image rep using `NSCalibratedRGBColorSpace`. What is the `colorSpaceName` of the original image's rep(s)? The different pixel values presumably represent the same color (as close as achievable) when interpreted through their respective color spaces. – Ken Thomases Dec 27 '16 at 21:06
  • @KenThomases Printing the colorspace from the NSBitMapRep of the original image gives me: sRGB IEC61966-2.1 colorspace – Guled Dec 27 '16 at 21:11
  • And what happens if you use the value of `colorSpaceName` in the call to create the bitmap image rep? – Ken Thomases Dec 27 '16 at 21:13
  • Ok, I think I fixed the problem with the hint in your first comment. Changing the color space to NSDeviceRGBColorSpace gave me the correct values now. :) – Guled Dec 27 '16 at 21:15
  • The colorspace of the resized image was: Generic RGB colorspace. Clearly there was a difference. Thank you so much for your help @KenThomases – Guled Dec 27 '16 at 21:17
  • @KenThomases I have one more problem. I'm getting RGB values of (0,0,0) for, what I believe are, transparent sections of the images (that are resized with the code above) I use. So for example, if I use an image with a transparent background, I'll get the correct RGB values for the actual object in the image, but it seems there are more (0,0,0) values when debugging. How do I only obtain the RGB values and ignore the transparent parts of the image. I'm not sure how to approach this. I have a feeling it has something to do with the `rep` variable in the resizing code. – Guled Dec 27 '16 at 22:38
  • I'm glad switching the color space helped. It's not clear to me what your latest question is about. You might want to open a new question where you can elaborate. A transparent pixel in the source image will have 0 alpha. The values for its r, g, and b components can be anything. When you draw it with `NSCompositingOperation.sourceOver`, those transparent pixels won't affect the destination. The destination will retain their original value. So, two things: 1) if the destination wasn't cleared to transparent, you'll lose transparency; 2) regardless, you'll lose the r, g, b of the source. – Ken Thomases Dec 27 '16 at 23:16
  • Using `NSCompositingOperation.copy` will fix both of those. – Ken Thomases Dec 27 '16 at 23:17
  • I tried `NSCompositingOperation.copy` and it didn't work. After reading the docs it seemed that such a solution would work, but there may be something else wrong that I am not aware of. I elaborated the problem here: http://stackoverflow.com/questions/41353567/why-do-i-get-rgb-values-of-0-0-0-for-an-nsimage-with-a-transparent-background – Guled Dec 27 '16 at 23:46

1 Answer


The issue has been resolved thanks to @KenThomases. When resizing the NSImage, I created the NSBitmapImageRep with NSCalibratedRGBColorSpace, while the original NSImage's rep used a different color space. A simple change of color space produced the correct results: NSCalibratedRGBColorSpace was changed to NSDeviceRGBColorSpace.
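Concretely, the fix was confined to the `colorSpaceName` argument in the resizing code. A minimal sketch of the corrected rep creation (Swift 3-era API names, as in the question):

```swift
// The only change from the question's resizing code: the color space name.
let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                           pixelsWide: Int(newSize.width),
                           pixelsHigh: Int(newSize.height),
                           bitsPerSample: 8,
                           samplesPerPixel: 4,
                           hasAlpha: true,
                           isPlanar: false,
                           colorSpaceName: NSDeviceRGBColorSpace, // was NSCalibratedRGBColorSpace
                           bytesPerRow: 0,
                           bitsPerPixel: 0)
```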
