
I'm trying to convert BGR to YUV with the cvCvtColor method and then get a reference to each component. The source image (IplImage1) has the following parameters:

  1. depth = 8
  2. nChannels = 3
  3. colorModel = RGB
  4. channelSeq = BGR
  5. width = 1620
  6. height = 1220

I convert and then get pointers to the components after the conversion:

IplImage* yuvImage = cvCreateImage(cvSize(1620, 1220), 8, 3);
cvCvtColor(IplImage1, yuvImage, CV_BGR2YCrCb);
yPtr = yuvImage->imageData;
uPtr = yPtr + height*width;
vPtr = uPtr + height*width/4;

I have a method that converts the YUV back to RGB and saves it to a file. When I create the YUV components manually (I create a blue image) it works, and when I open the image it really is blue. But when I create the YUV components using the code above, I get a black image. I think I may be getting the references to the YUV components wrong with the pointer arithmetic shown above.

What could be the problem?

theateist
  • Please *don't* use IplImages; avoid the outdated C API. They moved to C++ in 2010 already, and so should you. Please use cv::Mat and the cv:: namespace. – berak Jun 22 '14 at 08:03
  • @berak, I will after I understand what the problem is. – theateist Jun 22 '14 at 08:05
  • The channels are interleaved, not consecutive (see the sketch after these comments), so to get single components you will have to split() the image. Also, try to avoid direct access to the underlying imageData pointer. – berak Jun 22 '14 at 08:13
  • See this answer: http://stackoverflow.com/questions/24341114/simple-illumination-correction-in-images-opencv-c/24341809#24341809. It shows how to split a Mat. It is for Lab, but just use CV_BGR2YCrCb instead of CV_BGR2Lab. – Bull Jun 22 '14 at 08:24
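
To make the interleaving point concrete: after cvCvtColor the buffer holds the three components packed per pixel, so the pointer arithmetic in the question never reaches a separate chroma plane. An illustrative sketch (using the yuvImage from the question) of how a single pixel's components are actually laid out:

// Layout after cvCvtColor(..., CV_BGR2YCrCb) on a 3-channel, 8-bit image:
// imageData = Y0 Cr0 Cb0  Y1 Cr1 Cb1  Y2 Cr2 Cb2 ...
// so imageData + height*width is NOT the start of a separate chroma plane.
unsigned char* p = (unsigned char*)yuvImage->imageData;
int row = 10, col = 20;  // an arbitrary example pixel
unsigned char y  = p[row * yuvImage->widthStep + col * 3 + 0];
unsigned char cr = p[row * yuvImage->widthStep + col * 3 + 1];
unsigned char cb = p[row * yuvImage->widthStep + col * 3 + 2];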

2 Answers


If you really must use IplImage (e.g. in legacy code, or in C), then use cvSplit:

IplImage* IplImage1 = something;  // your 1620x1220 BGR source image
IplImage* ycrcbImage = cvCreateImage(cvSize(1620, 1220), 8, 3);
cvCvtColor(IplImage1, ycrcbImage, CV_BGR2YCrCb);

// One single-channel image per component
IplImage* yImage  = cvCreateImage(cvSize(1620, 1220), 8, 1);
IplImage* crImage = cvCreateImage(cvSize(1620, 1220), 8, 1);
IplImage* cbImage = cvCreateImage(cvSize(1620, 1220), 8, 1);
cvSplit(ycrcbImage, yImage, crImage, cbImage, 0);
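
After cvSplit, each of yImage, crImage and cbImage is a single-channel image, so the per-component pointers the question was after can be taken directly. A minimal sketch continuing from the code above (remember to release everything created with cvCreateImage):

// Each plane is now a separate 8-bit single-channel image
unsigned char* yPtr  = (unsigned char*)yImage->imageData;
unsigned char* crPtr = (unsigned char*)crImage->imageData;
unsigned char* cbPtr = (unsigned char*)cbImage->imageData;
// Note: rows may be padded, so step by widthStep rather than width
// if the row length is not a multiple of 4.

// Clean up when finished
cvReleaseImage(&yImage);
cvReleaseImage(&crImage);
cvReleaseImage(&cbImage);
cvReleaseImage(&ycrcbImage);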

The modern approach would be to avoid the legacy API and use Mats:

cv::Mat matImage1(IplImage1);
cv::Mat ycrcb_image;
cv::cvtColor(matImage1, ycrcb_image, CV_BGR2YCrCb);

// Extract the Y, Cr and Cb channels into separate Mats
std::vector<cv::Mat> planes(3);
cv::split(ycrcb_image, planes);
// Now you have the Y image in planes[0],
// the Cr image in planes[1],
// and the Cb image in planes[2]

cv::Mat Y = planes[0]; // if you want
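
If, as in the question, you need a raw pointer to each component (e.g. to hand to another API), each Mat produced by cv::split owns its own contiguous buffer. A sketch continuing from the code above:

// Each plane is a contiguous single-channel 8-bit Mat
unsigned char* yPtr  = planes[0].data;
unsigned char* crPtr = planes[1].data;
unsigned char* cbPtr = planes[2].data;
size_t planeSize = planes[0].total();  // width * height bytes per plane

Note that this gives three full-resolution planes (4:4:4). The pointer arithmetic in the question assumed a subsampled planar 4:2:0 layout; cv::cvtColor only produces that directly with the dedicated COLOR_BGR2YUV_I420 (a.k.a. IYUV) conversion, if your OpenCV version provides it.
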
Bull
  • Is using `cv::split` the fastest way to get the components? I use NVENC to compress the video in real time. NVENC takes YUV as input but my frames are in RGB, so I need to convert RGB to YUV very fast and pass its components to the NVENC API. – theateist Jun 22 '14 at 10:21
  • I don't have the `cv::cvtColor` function, only `cvCvtColor`. But I do have both the `cvSplit` and `cv::split` functions. – theateist Jun 22 '14 at 14:51
  • @theateist, it is not possible to have `cvCvtColor` and not have `cv::cvtColor`, because the implementation of the former merely calls the latter. Your problem is almost certainly to do with include files. You need to make sure that `imgproc.hpp` ultimately gets included; e.g. `#include "opencv2/opencv.hpp"` will pick it up, or `#include "opencv2/imgproc/imgproc.hpp"`. – Bull Jun 22 '14 at 15:11
  • Regarding speed, if you don't need all three channels from split, it would be faster to use `mixChannels()` (http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#mixchannels). You would need to time `cv::split` to see if it is likely to be a bottleneck. `cv::split` has to handle anything you pass it, so, if necessary, you can surely write something faster that only handles your situation. Look at the source in convert.cpp; you might try a version without NAryMatIterator, for example. – Bull Jun 22 '14 at 15:36
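
As a rough illustration of the mixChannels() suggestion above, this sketch extracts only the Y channel without splitting all three (the file name and variable names are just placeholders):

cv::Mat bgr = cv::imread("input.png");       // placeholder input image
cv::Mat ycrcb;
cv::cvtColor(bgr, ycrcb, CV_BGR2YCrCb);

cv::Mat y(ycrcb.size(), CV_8UC1);            // destination for the Y plane only
int fromTo[] = { 0, 0 };                     // source channel 0 (Y) -> dest channel 0
cv::mixChannels(&ycrcb, 1, &y, 1, fromTo, 1);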

While RGB represents color as red, green, and blue, the YCbCr color model represents it as brightness plus two color-difference signals: Y is the brightness (luma), Cb is blue minus luma (B − Y), and Cr is red minus luma (R − Y).
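
For reference, OpenCV's 8-bit BGR to YCrCb conversion uses approximately the following formulas, with the two chroma channels offset by 128 so they fit in unsigned bytes:

Y  = 0.299*R + 0.587*G + 0.114*B
Cr = (R - Y)*0.713 + 128
Cb = (B - Y)*0.564 + 128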

Here is equivalent code in Python, in case you are using OpenCV 3.0.0:

import numpy as np
import cv2

# Read and display the source image
x = 'C:/Users/524316/Desktop/car.jpg'
img = cv2.imread(x, 1)
cv2.imshow("img", img)

# Convert to the YCrCb color space
YCrCb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
cv2.imshow("YCrCb", YCrCb)

# Split the channels individually
Y, Cr, Cb = cv2.split(YCrCb)

cv2.imshow('Y_channel', Y)
cv2.imshow('Cr_channel', Cr)
cv2.imshow('Cb_channel', Cb)

cv2.waitKey(0)
cv2.destroyAllWindows()

Original image: [image]

YCrCb image: [image]

Y channel (it is the same as the grayscale image): [image]

Cr channel: [image]

Cb channel: [image]

Jeru Luke