My input is a PIL.Image.Image with mode RGB or RGBA, and I need to fill a numpy.ndarray with three float values per pixel, calculated from that pixel's RGB values. The output array should be indexable by the pixel coordinates. I have found the following way to do it:

import numpy as np
from PIL import Image

def generate_ycbcr(img: Image.Image):
    for r, g, b, *_ in img.getdata():  # *_ absorbs the alpha channel, if present
        yield 0.299 * r + 0.587 * g + 0.114 * b            # Y
        yield 128 - 0.168736 * r - 0.331264 * g + 0.5 * b  # Cb
        yield 128 + 0.5 * r - 0.418688 * g - 0.081312 * b  # Cr

def get_ycbcr_arr(img: Image.Image):
    width, height = img.size
    arr = np.fromiter(generate_ycbcr(img), float, height * width * 3)
    return arr.reshape(height, width, 3)
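
To make the indexing explicit: the reshape is height-first, so the result is indexed as arr[y, x]. A minimal usage sketch (the filename is just a placeholder):

img = Image.open("some_image.png")  # placeholder path
arr = get_ycbcr_arr(img)
print(arr[10, 20])                  # [Y, Cb, Cr] of the pixel in row 10, column 20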

It works, but I suspect there is a better and/or faster way. Is there one? (If there isn't, I would like to know that too.)

N.B.: I know I can convert() the image to YCbCr, and then fill a numpy.array from that, but the conversion is rounded to integer values, which is not what I need.
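
To make that concrete, this is the kind of check I mean (a minimal sketch, assuming img is the image in question):

ycc = np.array(img.convert('YCbCr'))
print(ycc.dtype)  # uint8 – every channel value has been rounded to an integer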

1 Answer

For starters, you can convert an image directly to a numpy array and use vectorized operations to do what you want:

def get_ycbcr_vectorized(img: Image.Image):
    # channels-first view: R, G and B each have shape (height, width)
    R, G, B = np.array(img).transpose(2, 0, 1)[:3]  # ignore alpha if present
    Y = 0.299 * R + 0.587 * G + 0.114 * B
    Cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B
    Cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B
    # back to (height, width, 3), matching get_ycbcr_arr
    return np.array([Y, Cb, Cr]).transpose(1, 2, 0)

print(np.array_equal(get_ycbcr_arr(img), get_ycbcr_vectorized(img))) # True
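
Since speed is part of the question, here is a rough way to compare the two (a sketch with timeit; the actual numbers depend on the image size and your machine, so none are quoted here). In practice the vectorized version is typically much faster, because it avoids the per-pixel Python loop:

import timeit
print(timeit.timeit(lambda: get_ycbcr_arr(img), number=10))
print(timeit.timeit(lambda: get_ycbcr_vectorized(img), number=10))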

However, are you sure that converting directly to 'YCbCr' would really be that different? I tested how much rounding changes the conversion defined in the function above:

import matplotlib.pyplot as plt
def aux():
    # generate every integer R/G/B combination (as broadcastable axes)
    R, G, B = np.ogrid[:256, :256, :256]
    Y = 0.299 * R + 0.587 * G + 0.114 * B
    Cb = 128 - 0.168736 * R - 0.331264 * G + 0.5 * B
    Cr = 128 + 0.5 * R - 0.418688 * G - 0.081312 * B

    # plot the maximum rounding error over the B axis;
    # transpose so that R runs along the x-axis and G along the y-axis
    for arr, label in zip([Y, Cb, Cr], ['Y', 'Cb', 'Cr']):
        plt.figure()
        plt.imshow((arr - arr.round()).max(-1).T)
        plt.xlabel('R')
        plt.ylabel('G')
        plt.title(f'max_B ({label} - {label}.round())')
        plt.colorbar()

aux()
plt.show()

The plots show that the largest absolute error reaches 0.5 (the worst case for rounding), and that errors of this size occur all over the RGB cube:

[Plots: RGB -> Y error, RGB -> Cb error, RGB -> Cr error]

So yes, a deviation of up to 0.5 can be a noticeable relative error for small channel values, but it isn't necessarily a problem.
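
If you want to measure the error on your actual image rather than over all possible RGB values, here is a quick sketch (it reuses get_ycbcr_vectorized from above and prints the worst-case deviation of the built-in conversion):

builtin = np.array(img.convert('YCbCr'), dtype=float)
exact = get_ycbcr_vectorized(img)
print(np.abs(builtin - exact).max())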

In case the built-in conversion suffices:

arr = np.array(img.convert('YCbCr'))

is all you need.