
I have created a custom UVC camera which can stream a 10-bit raw RGB (according to the datasheet) sensor image. I had to pack the 10-bit samples into 16-bit words and write the descriptors as YUY2 media (UVC does not support raw formats). Now I have a video feed (opened it with AMCap, VLC, and a custom OpenCV app), but the video is noisy and purple. I started to process the data with OpenCV and read a bunch of posts about the problem, but now I am a bit confused about how to solve it. I would love to learn more about image formats and processing, but I am a bit overwhelmed by the amount of information and need some guidance. Also, based on the sensor datasheet it is a BGGR Bayer grid, and the similar posts describe the problem as a greenish noisy picture, but I get purple pictures.
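Since the stream is advertised as YUY2 but actually carries 10-bit samples packed into 16-bit words, one early thing worth checking is the byte order of those words: decoding with the wrong endianness swaps the high and low bytes and produces exactly this kind of structured color noise. A minimal sketch (the sample value is made up for illustration):

```python
import numpy as np

# A hypothetical 10-bit sample and its 16-bit little-endian byte layout.
sample = 0x2A7  # 679, fits in 10 bits

buf_le = np.array([sample], dtype='<u2').tobytes()  # little-endian: low byte first

# Decoding with the matching endianness recovers the value;
# decoding with the wrong one scrambles it:
print(np.frombuffer(buf_le, '<u2')[0])  # 679 (correct)
print(np.frombuffer(buf_le, '>u2')[0])  # 42754 (byte-swapped garbage)
```

If the raw frames only look plausible after a byte swap, the order in which the two byte planes are combined is the place to adjust.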

[purple image from the camera]

UPDATE: I used the post mentioned in the comments to get a proper 16-bit one-channel (grayscale) image, but I am not able to demosaic the image properly.

import cv2
import numpy as np

# Open the capture device (index 1) through Media Foundation
cap = cv2.VideoCapture(1, cv2.CAP_MSMF)

# Set width and height
cols, rows = 400, 400
cap.set(cv2.CAP_PROP_FRAME_WIDTH, cols)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, rows)
cap.set(cv2.CAP_PROP_FPS, 30)
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)

# Fetch undecoded RAW video streams (frames arrive as Mat 8UC1)
cap.set(cv2.CAP_PROP_FORMAT, -1)

while True:
    # Capture frame-by-frame: a flat uint8 array of rows*cols*2 bytes ([1, 320000])
    ret, frame = cap.read()
    if not ret:
        break

    frame = frame.reshape(rows, cols*2)  # Reshape to 400x800 (two bytes per pixel)
    frame = frame.astype(np.uint16)      # Widen uint8 elements to uint16
    # Combine each byte pair (high byte first) into one 16-bit sample; result is 400x400.
    frame = (frame[:, 0::2] << 8) + frame[:, 1::2]
    # The 10-bit data is unsigned, so keep it as uint16; a signed int16 view
    # would overflow after the shift below and corrupt the image.

    # Apply some processing for display (this part is just "cosmetics"):
    frame_roi = frame[:, 10:-10]  # Crop to 400x380 (the left and right edges are not meant to be displayed).
    # frame_roi = cv2.medianBlur(frame_roi, 3)  # Clean the dead pixels (just for better viewing).
    frame_roi = frame_roi << 6    # Shift the 10-bit data into the upper bits of the 16-bit range
    # Convert to uint8 with min-max normalization (just for viewing the image).
    normed = cv2.normalize(frame_roi, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
    # Demosaic; the Bayer code (BG/GB/RG/GR) may need to be changed to match the sensor.
    bgr = cv2.cvtColor(normed, cv2.COLOR_BayerGR2BGR)

    cv2.imshow('normed', normed)  # Show the normalized single-channel frame
    cv2.imshow('bgr', bgr)        # Show the demosaiced frame
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    cv2.imwrite('normed.png', normed)  # Note: overwritten on every frame
    cv2.imwrite('colored.png', bgr)

cap.release()
cv2.destroyAllWindows()

From this:

[16-bit single-channel grayscale frame]

I got this:

[demosaiced frame with wrong colors]

SECOND UPDATE:

To get more relevant information about the state of the image, I took some pictures of a different target (another devboard with a camera module; both modules should be blue and the PCB should be orange-ish), and I repeated this with the camera's test pattern. I took pictures after every step of the script:

- `frame.reshape(rows, cols*2)`: camera target [image], test pattern [image]
- `frame.astype(np.uint16)`: camera target [image], test pattern [image]
- `frame.view(np.int16)`: camera target [image], test pattern [image]
- `cv2.normalize`: camera target [image], test pattern [image]
- `cv2.COLOR_BAYER_GR2BGR`: camera target [image], test pattern [image]

On the bottom and top of the camera target pictures there is a pink wrap foil protecting the camera (it looks green in the pictures). The vendor did not provide me with documentation for the sensor, so I do not know what the proper test pattern should look like, but I am sure that one is not correct.

  • please present how exactly you did that. please don't call pixels "packets". – Christoph Rackwitz Mar 16 '22 at 16:00
  • Note: what the sensor does (per pixel) is not so relevant to the output. Sensors are often linear, but most video formats are "gamma corrected". And note: raw video formats often have colour casts when you do not use a camera profile/primaries (every sensor is special, and colour management is done later: with raw you want the raw data from the sensor without any additional mangling) – Giacomo Catenazzi Mar 16 '22 at 16:07
  • In case you want us to help you processing the image, we must have access to a raw video frame (2 bytes per pixel). You may follow the same procedure as described in [this post](https://stackoverflow.com/questions/70718890/how-to-retrieve-raw-data-from-yuv2-streaming). Make the required adjustments to the specific resolution (note: the example is Windows oriented). In case the image exceeds the site size limit, find a way to share one raw video frame. Add to your post, the code used for grabbing. – Rotem Mar 16 '22 at 20:56
  • Thank you for your comments. With the post you mentioned I was able to shift the bits and get a proper image. But in the code from the post the image is rendered into a 1-channel image, and when I try to recreate the colors I always get fake colors. I am still trying to understand what is happening and why, so if you have any other source where I could learn more about the topic I would be grateful. – LB91 Mar 18 '22 at 14:37
  • In case you want to address me, start the comment with @rotem. The BGGR raw format is actually read as single color channel. The conversion to 3 color channels is done by [Demosaicing](https://en.wikipedia.org/wiki/Demosaicing) algorithm. I may be able to help you if you share a 16 bit (single channel) raw image. Please edit your post - add the code used for grabbing, and the code used for conversion to 3 color channels. – Rotem Mar 18 '22 at 21:54
  • @rotem Thank you for you response, i edited my question based on your guides. – LB91 Mar 21 '22 at 16:23
  • I am getting [this image](https://i.stack.imgur.com/025Te.png). Can you post the `frame` after this line `frame = frame.reshape(rows, cols*2)` as PNG image? You know you can save the image using `cv2.imwrite('frame.png', frame)` right? Please take a picture of something more meaningful, so we can check the correctness of the colors. – Rotem Mar 21 '22 at 17:17
  • @rotem I updated the post with more relevant target and test pattern pictures. I tried using different types of the OpenCV-based demosaicing functions, but none of them looked the proper way, or maybe I missed something? – LB91 Mar 22 '22 at 09:33
  • I think `cv2.COLOR_BAYER_RG2BGR` gives the best result. The frame `frame.reshaped(row, cols*2)` doesn't make sense. The pattern range is [0, 255] (8 bits) instead of [0, 1023] (10 bits), and it also doesn't make sense. – Rotem Mar 22 '22 at 14:48
  • @rotem As much as I understand the code, reshape is needed to create a 400*800 2-D array from a linear 320000-long array. And because of the 8-bit representation the length had to be doubled to handle the 16-bit (2*8 bit) data, which is then cast into 16-bit data as a 400*400 2-D array. RG2BGR is still not enough, and I also have to implement this code in C++, so I will continue searching for the solution. But thank you for your help. – LB91 Mar 22 '22 at 15:52
