
I am writing a scaling algorithm for YUV422 packed format images (without any intermediate conversion to RGB or grayscale or what have you). As can be seen in the MSDN image linked below, the 4:2:2 format carries two luma (Y) samples for every U/V pair of chroma samples. My test bench procures images from the iSight camera using OpenCV APIs, converts them to YUV (CV_BGR2YUV) and then resizes them. My questions are:

  1. Below is some sample data for reference, taken straight from a memory dump (via the OpenCV Mat's raw data pointer). How do I tell, just by looking at these bytes, which ones are the Y component and which are the U/V components? (A byte-layout sketch follows the image link below.) The bytes: 15 8B 7A 17 8A 7A 18 8A 7B 17 89 7A 19 89 79 19
  2. Is this bilinear interpolation algorithm correct? Say my box of source samples is:

    TOP ROW: Y00, U00, Y01, V00, Y02, U01, Y03, V01,

    BOTTOM ROW: Y10, U10, Y11, V10, Y12, U11, Y13, V11,

    Result is interpolation of: (Y00, Y01, Y10, Y11), (U00, U01, U10, U11), (Y02, Y03, Y12, Y13), (V00, V01, V10, V11).

Those four interpolations form my first output YUYV macropixel (two pixels in 32 bits).
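
Below is a minimal sketch of that 2:1 case, assuming a packed YUY2 buffer with even dimensions and no row padding (the function name and signature are just mine for illustration, not part of any library). It averages the four Y samples and the four U/V samples per block exactly as grouped above:

    #include <cstdint>

    // Minimal sketch (assumptions: packed YUY2, even width/height, no row
    // padding; downscaleYUY2Half is an illustrative name, not a library call).
    // Each pair of input rows and each pair of input macropixels (8 bytes)
    // collapses into one output macropixel (4 bytes): Y'0 U'0 Y'1 V'0.
    static void downscaleYUY2Half(const uint8_t* src, int srcWidth, int srcHeight,
                                  uint8_t* dst)
    {
        const int srcStride = srcWidth * 2;        // 2 bytes per pixel in YUY2
        const int dstStride = (srcWidth / 2) * 2;

        for (int y = 0; y < srcHeight; y += 2) {
            const uint8_t* top = src + y * srcStride;
            const uint8_t* bot = top + srcStride;
            uint8_t*       out = dst + (y / 2) * dstStride;

            for (int x = 0; x + 8 <= srcStride; x += 8, out += 4) {
                // Byte order per input macropixel: [Y0][U0][Y1][V0]
                out[0] = (top[x + 0] + top[x + 2] + bot[x + 0] + bot[x + 2] + 2) / 4; // Y' from Y00,Y01,Y10,Y11
                out[1] = (top[x + 1] + top[x + 5] + bot[x + 1] + bot[x + 5] + 2) / 4; // U' from U00,U01,U10,U11
                out[2] = (top[x + 4] + top[x + 6] + bot[x + 4] + bot[x + 6] + 2) / 4; // Y' from Y02,Y03,Y12,Y13
                out[3] = (top[x + 3] + top[x + 7] + bot[x + 3] + bot[x + 7] + 2) / 4; // V' from V00,V01,V10,V11
            }
        }
    }

For non-integer scale factors the grouping stays the same, but the four neighbours would get distance-weighted coefficients instead of a plain average.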

Any references to principles of performing bilinear interpolation on YUYV images would be very helpful! Thanks in advance.

See image format
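
If the buffer really were laid out as packed 4:2:2 (YUY2) like in that figure, every 4-byte group would read Y0 U0 Y1 V0, so a throwaway helper along these lines is how I would sanity-check the dump above (this is only my assumption about the layout, not something OpenCV guarantees for a CV_BGR2YUV result; dumpAsYUY2 is just an illustrative name):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Throwaway helper: print a buffer under the packed 4:2:2 assumption,
    // where every 4-byte group is Y0 U0 Y1 V0 -- even offsets are luma,
    // odd offsets alternate between U and V.
    static void dumpAsYUY2(const uint8_t* data, size_t nBytes)
    {
        for (size_t i = 0; i + 4 <= nBytes; i += 4) {
            std::printf("macropixel %zu: Y0=%02X U=%02X Y1=%02X V=%02X\n",
                        i / 4,
                        (unsigned)data[i], (unsigned)data[i + 1],
                        (unsigned)data[i + 2], (unsigned)data[i + 3]);
        }
    }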

[EDIT]: Please note that the linked post below is somewhat different, in that it does not discuss the effects of additive (averaging) operations on the YUV data; it just discards pixels to downsize. Resize (downsize) YUV420sp image
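
For comparison, here is roughly what the straightforward OpenCV route looks like if I just resize the 3-channel Mat that CV_BGR2YUV gives me; cv::resize with INTER_LINEAR interpolates each channel independently (resizeYuv3Channel is just an illustrative wrapper name):

    #include <opencv2/opencv.hpp>

    // Comparison path: let cv::resize handle the 3-channel YUV Mat directly.
    // INTER_LINEAR is bilinear and operates on each channel on its own.
    static cv::Mat resizeYuv3Channel(const cv::Mat& yuv, const cv::Size& dstSize)
    {
        cv::Mat out;
        cv::resize(yuv, out, dstSize, 0, 0, cv::INTER_LINEAR);
        return out;
    }

    // Usage (assuming 'bgr' is a frame grabbed from the iSight camera):
    //   cv::Mat yuv;
    //   cv::cvtColor(bgr, yuv, CV_BGR2YUV);
    //   cv::Mat half = resizeYuv3Channel(yuv, cv::Size(yuv.cols / 2, yuv.rows / 2));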

  • Usually with multi-channel images you can just interpolate each channel independently, but since `U` and `V` are difference signals I don't know if there are subtle errors this approach would introduce. The conversion from YUV to RGB and back again is simple enough that it would be the most robust way. – Mark Ransom Sep 13 '16 at 19:03
  • Thanks for your comment. I am trying to save processing time by eliminating the intermediate conversion. My concern was about the chroma bytes being difference signals as well. – chimp45 Sep 13 '16 at 19:15

0 Answers