How to read a frame from YUV file in OpenCV?
6 Answers
I wrote a very simple piece of Python code to read a YUV NV21 stream from a binary file (Python 2; see a later answer for a Python 3 version).
import cv2
import numpy as np
class VideoCaptureYUV:
    def __init__(self, filename, size):
        self.height, self.width = size
        self.frame_len = self.width * self.height * 3 / 2
        self.f = open(filename, 'rb')
        self.shape = (int(self.height*1.5), self.width)

    def read_raw(self):
        try:
            raw = self.f.read(self.frame_len)
            yuv = np.frombuffer(raw, dtype=np.uint8)
            yuv = yuv.reshape(self.shape)
        except Exception as e:
            print str(e)
            return False, None
        return True, yuv

    def read(self):
        ret, yuv = self.read_raw()
        if not ret:
            return ret, yuv
        bgr = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_NV21)
        return ret, bgr


if __name__ == "__main__":
    #filename = "data/20171214180916RGB.yuv"
    filename = "data/20171214180916IR.yuv"
    size = (480, 640)
    cap = VideoCaptureYUV(filename, size)

    while 1:
        ret, frame = cap.read()
        if ret:
            cv2.imshow("frame", frame)
            cv2.waitKey(30)
        else:
            break

For YUV 4:2:2, frame_len is multiplied by `2`, the shape becomes `self.shape = (self.height, self.width, 2)`, and the color conversion code also needs to change to one of the YUV 422 family codes: https://docs.opencv.org/3.1.0/d7/d1b/group__imgproc__misc.html#ga4e0972be5de079fed4e3a10e24ef5ef0 – biendltb Apr 03 '19 at 20:10
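To illustrate that comment (this is not part of the original answer), here is a minimal sketch of the same class adapted for a packed 4:2:2 stream; the class name is made up, and the conversion flag must match the byte order of your file (cv2.COLOR_YUV2BGR_UYVY, cv2.COLOR_YUV2BGR_YUY2, cv2.COLOR_YUV2BGR_YVYU, ...):

import cv2
import numpy as np

class VideoCaptureYUV422:
    """Sketch: read a packed YUV 4:2:2 file frame by frame (hypothetical class)."""
    def __init__(self, filename, size):
        self.height, self.width = size
        self.frame_len = self.width * self.height * 2      # 2 bytes per pixel in 4:2:2
        self.f = open(filename, 'rb')
        self.shape = (self.height, self.width, 2)

    def read(self):
        raw = self.f.read(self.frame_len)
        if len(raw) != self.frame_len:
            return False, None
        yuv = np.frombuffer(raw, dtype=np.uint8).reshape(self.shape)
        bgr = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_UYVY)    # assumes UYVY byte order
        return True, bgr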
As mentioned, there are MANY types of YUV formats.
Converting from a YUV format to RGB in OpenCV is very simple:
- Create a one-dimensional OpenCV Mat of the appropriate size for that frame data
- Create an empty Mat for the RGB data with the desired dimension AND with 3 channels
- Finally use cvtColor to convert between the two Mats, using the correct conversion flag enum
Here is an example for a YUV buffer in YV12 format:
Mat mYUV(height + height/2, width, CV_8UC1, (void*) frameData);
Mat mRGB(height, width, CV_8UC3);
cvtColor(mYUV, mRGB, CV_YUV2RGB_YV12, 3);
The key trick is to define the dimensions of your RGB Mat before you convert.
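For readers working from Python, a rough equivalent of the YV12 example above might look like this (a sketch only; the file name and dimensions are placeholders):

import cv2
import numpy as np

width, height = 640, 480                                   # placeholder dimensions
with open('frame.yv12', 'rb') as f:                        # placeholder raw YV12 frame
    frame_data = f.read(width * height * 3 // 2)

# A one-channel (height * 3/2) x width array plays the role of mYUV above
yuv = np.frombuffer(frame_data, dtype=np.uint8).reshape((height * 3 // 2, width))
rgb = cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_YV12)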

This is the right answer. I was handling the NV12 variation of YUV and these helped me understand the format: https://wiki.videolan.org/YUV/#NV12, https://commons.wikimedia.org/wiki/File:Common_chroma_subsampling_ratios.svg – rhardih Oct 11 '16 at 13:05
UPDATE: there's a newer version of the code here: https://github.com/chelyaev/opencv-yuv
I'm posting some code that will read a single YUV 4:2:0 planar image file. You can directly apply this to most YUV files (just keep reading from the same FILE object). The exception is YUV files that have a header (typically, they have a *.y4m extension). If you want to deal with such files, you have two options:
- Write your own function to consume the header data from the FILE object before using the code below
- Strip the headers from *.y4m files (using ffmpeg or a similar tool). This is the option I prefer, since it's the simplest.
It also will not work for any other form of YUV format (non-planar, different chroma decimation). As @Stephane pointed out, there are many such formats (and most of them don't have any identifying headers), which is probably why OpenCV doesn't support them out of the box.
But working with them is fairly simple:
- Start with an image and its dimensions (this is required when reading a YUV file)
- Read luma and chroma into 3 separate images
- Upscale the chroma images by a factor of 2 to compensate for chroma decimation. Note that there are actually several ways to compensate for chroma decimation; upsampling is just the simplest
- Combine into a YUV image. If you want RGB, you can use cvCvtColor.
Finally, the code:
IplImage *
cvLoadImageYUV(FILE *fin, int w, int h)
{
    assert(fin);

    IplImage *py     = cvCreateImage(cvSize(w, h),     IPL_DEPTH_8U, 1);
    IplImage *pu     = cvCreateImage(cvSize(w/2, h/2), IPL_DEPTH_8U, 1);
    IplImage *pv     = cvCreateImage(cvSize(w/2, h/2), IPL_DEPTH_8U, 1);
    IplImage *pu_big = cvCreateImage(cvSize(w, h),     IPL_DEPTH_8U, 1);
    IplImage *pv_big = cvCreateImage(cvSize(w, h),     IPL_DEPTH_8U, 1);
    IplImage *image  = cvCreateImage(cvSize(w, h),     IPL_DEPTH_8U, 3);
    IplImage *result = NULL;

    assert(py);
    assert(pu);
    assert(pv);
    assert(pu_big);
    assert(pv_big);
    assert(image);

    for (int i = 0; i < w*h; ++i)
    {
        int j = fgetc(fin);
        if (j < 0)
            goto cleanup;
        py->imageData[i] = (unsigned char) j;
    }

    for (int i = 0; i < w*h/4; ++i)
    {
        int j = fgetc(fin);
        if (j < 0)
            goto cleanup;
        pu->imageData[i] = (unsigned char) j;
    }

    for (int i = 0; i < w*h/4; ++i)
    {
        int j = fgetc(fin);
        if (j < 0)
            goto cleanup;
        pv->imageData[i] = (unsigned char) j;
    }

    cvResize(pu, pu_big, CV_INTER_NN);
    cvResize(pv, pv_big, CV_INTER_NN);
    cvMerge(py, pu_big, pv_big, NULL, image);

    result = image;

cleanup:
    cvReleaseImage(&pu);
    cvReleaseImage(&pv);
    cvReleaseImage(&py);
    cvReleaseImage(&pu_big);
    cvReleaseImage(&pv_big);

    if (result == NULL)
        cvReleaseImage(&image);

    return result;
}
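For reference only (not part of the original answer), the same steps translate fairly directly to NumPy and the modern cv2 API; this is a sketch, assuming a planar YUV 4:2:0 (I420) frame read from an open binary file object:

import cv2
import numpy as np

def load_image_yuv420(f, w, h):
    # Read the full-size Y plane, then the quarter-size U and V planes
    y = np.frombuffer(f.read(w * h), dtype=np.uint8)
    u = np.frombuffer(f.read(w * h // 4), dtype=np.uint8)
    v = np.frombuffer(f.read(w * h // 4), dtype=np.uint8)
    if y.size < w * h or u.size < w * h // 4 or v.size < w * h // 4:
        return None                                        # short read: end of file
    y = y.reshape(h, w)
    # Nearest-neighbour chroma upsampling, like cvResize(..., CV_INTER_NN) above
    u = cv2.resize(u.reshape(h // 2, w // 2), (w, h), interpolation=cv2.INTER_NEAREST)
    v = cv2.resize(v.reshape(h // 2, w // 2), (w, h), interpolation=cv2.INTER_NEAREST)
    yuv = cv2.merge((y, u, v))                             # 3-channel YUV image
    return yuv          # convert with cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR) if BGR is needed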
I have the same problem now: I'm trying to open and work with a video that has UYVY (4:2:2) as its codec. I tried your code but it didn't work. I know you mentioned that in your answer, but can you tell me why? Thanks in advance for your help – Engine Feb 13 '13 at 10:09
The code I posted handles YUV 4:2:0. Since your video is in YUV 4:2:2, my code will definitely not work on it directly. You will need to adapt the code to handle your format. For more details, see: http://en.wikipedia.org/wiki/Chroma_subsampling#4:2:2 – mpenkov Feb 14 '13 at 04:21
I don't think it is possible, at least with the current version. Of course, it wouldn't be that difficult to add, but it is not such an interesting feature, as:
- OpenCV usually works on webcam streams, which are in RGB format, or on coded files, which are directly decoded into RGB for display purposes;
- OpenCV is dedicated to Computer Vision, where YUV is a less common format than in the coding community, for example;
- there are a lot of different YUV formats, which would imply a lot of work to implement them.
Conversions are still possible though, using cvCvtColor(), which means the format is of some interest anyway.

I encountered the same problem. My solution is:
1. Read one YUV frame (such as I420) into a string object "yuv".
2. Convert the YUV frame to BGR24 format. I use libyuv to do it; it is easy to write a Python wrapper for the libyuv functions. You now get another string object "bgr" in BGR24 format.
3. Use numpy.fromstring to get an image object from the "bgr" string object. You need to change the shape of the image object.
Below is a simple yuv viewer for your reference.
import cv2
# below is the extension wrapper for libyuv
import yuvtorgb
import numpy as np

f = open('i420_cif.yuv', 'rb')

w = 352
h = 288
size = 352*288*3/2

while True:
    try:
        yuv = f.read(size)
    except:
        break
    if len(yuv) != size:
        f.seek(0, 0)
        continue

    bgr = yuvtorgb.i420_to_bgr24(yuv, w, h)

    img = np.fromstring(bgr, dtype=np.uint8)
    img.shape = h,w,3

    cv2.imshow('img', img)

    if cv2.waitKey(50) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
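For comparison (not part of the original answer), here is a sketch of the same viewer without the libyuv wrapper, letting OpenCV itself do the I420 to BGR conversion; it assumes the same file name and dimensions as above:

import cv2
import numpy as np

w, h = 352, 288
frame_len = w * h * 3 // 2

with open('i420_cif.yuv', 'rb') as f:
    while True:
        raw = f.read(frame_len)
        if len(raw) != frame_len:                 # end of file: stop instead of looping
            break
        yuv = np.frombuffer(raw, dtype=np.uint8).reshape((h * 3 // 2, w))
        bgr = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)
        cv2.imshow('img', bgr)
        if cv2.waitKey(50) & 0xFF == ord('q'):
            break

cv2.destroyAllWindows()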

For future reference: I have converted @xianyanlin's brilliant answer to Python 3. The below code works with videos taken from the Raspberry Pi camera and seems to output correct color and aspect ratio.
Warning: it uses the numpy convention of specifying resolution as height * width, e.g. 1080 * 1920 or 480 * 640.
import cv2
import numpy as np

class VideoCaptureYUV:
    def __init__(self, filename, size):
        self.height, self.width = size
        self.frame_len = self.width * self.height * 3 // 2
        self.f = open(filename, 'rb')
        self.shape = (int(self.height*1.5), self.width)

    def read_raw(self):
        try:
            raw = self.f.read(self.frame_len)
            yuv = np.frombuffer(raw, dtype=np.uint8)
            yuv = yuv.reshape(self.shape)
        except Exception as e:
            print(str(e))
            return False, None
        return True, yuv

    def read(self):
        ret, yuv = self.read_raw()
        if not ret:
            return ret, yuv
        bgr = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)
        return ret, bgr
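Usage then mirrors the main loop from the original answer (the file name and resolution below are placeholders):

if __name__ == "__main__":
    filename = "data/video_i420.yuv"          # placeholder path to a raw I420 file
    size = (480, 640)                         # (height, width), numpy order
    cap = VideoCaptureYUV(filename, size)

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow("frame", frame)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break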
