
I have a pipeline that records images on a Raspberry Pi, converts them to TIFF via dcraw, and then estimates the average intensity of the three channels (RGB).

Since the conversion to TIFF is the bottleneck, I wonder if I can skip it. More specifically:

Is it possible to extract average intensity for the three channels directly from the raw image?

Here is an example of an image I am trying to work with (note: it has poor exposure and resolution, so don't expect to see anything interesting): https://drive.google.com/file/d/1FNFGuIAw-948c0loj5y4aZTRJK0TpoWB/view?usp=sharing It was collected as:

raspiraw -md 7 -t 1000 -ts /dev/shm/tstamps.csv -hd0 /dev/shm/hd0.32k -h 32 -w 32 --vinc 1F --fps 1 -sr 1 -o /dev/shm/out.%06d.raw

So far, I tried the following three options:

  1. Using PIL directly, but it throws an UnidentifiedImageError:
from PIL import Image
Image.open('image.raw')
  2. Using numpy directly. This gives me large integers, but I am not sure how they map onto the RGB image (one possible interpretation is sketched after this list).
>>> import numpy as np
>>> np.fromfile('image.raw', dtype=np.uint16)
array([4112, 4112, 4184, ...,    0,    0,    0], dtype=uint16)
  3. Using rawkit, but unfortunately there is a compatibility issue with more recent versions of LibRaw (e.g. 0.20.2).
from rawkit.raw import Raw
import numpy as np
from PIL import Image

raw_image = Raw('image.raw')
buffered_image = np.array(raw_image.to_buffer())
Image.frombytes('RGB', (raw_image.metadata.width, raw_image.metadata.height), buffered_image)
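
If those uint16 values from option 2 are the Bayer mosaic itself, the channel averages could be computed with plain array slicing and no demosaic at all. A minimal sketch, assuming the file is a bare 32x32 mosaic of unpacked little-endian 16-bit samples with no header and a BGGR pattern (assumptions I have not verified against the sensor documentation):

import numpy as np

# Assumptions (verify against your sensor/raspiraw mode): bare 32x32
# mosaic of unpacked uint16 samples, BGGR Bayer pattern, no header bytes.
H, W = 32, 32
mosaic = np.fromfile('image.raw', dtype=np.uint16, count=H * W).reshape(H, W)

# In a 2x2 BGGR tile: [0,0]=B, [0,1]=G, [1,0]=G, [1,1]=R.
b = mosaic[0::2, 0::2]
g = np.concatenate((mosaic[0::2, 1::2].ravel(), mosaic[1::2, 0::2].ravel()))
r = mosaic[1::2, 1::2]

print('R:', r.mean(), 'G:', g.mean(), 'B:', b.mean())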
joeDiHare
  • It is unclear what your shared file is or how it was recorded - please clarify. At 6kB it seems unlikely to be much of an image. – Mark Setchell May 06 '22 at 06:12
  • Hi @MarkSetchell, I added the `raspiraw` command to generate the image. It's a tiny 32x32 image with low resolution/exposure. The actual content of the image does not matter, it's more about being able to take the average intensity of R, G and B from *any* raw image in the fastest way possible – joeDiHare May 06 '22 at 18:34

1 Answer


I haven't used raspiraw myself, but based on the documentation here I think there are some fundamental issues with your approach.

Firstly, PIL isn't going to have a clue about raw images, so you can forget approach 1). And even if you can read the data with Numpy, Numpy itself knows nothing about image processing in general, least of all about debayering raw images. So that leaves your third approach...

The document I linked to above says that writing the Broadcom header on each image slows things down too much, so you need to take the one-off header from /dev/shm/hd0.32k and prepend that onto the start of any of your frames. So, in effect you need to do:

cat /dev/shm/hd0.32k /dev/shm/out.000001.raw > frame1.raw
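
If you would rather not shell out for every frame, the same prepend step is easy to do in Python (a minimal sketch, using the paths from the question):

# Prepend the one-off Broadcom header to a frame, equivalent to the
# cat command above; paths are the ones from the question.
with open('/dev/shm/hd0.32k', 'rb') as hdr, \
     open('/dev/shm/out.000001.raw', 'rb') as frame, \
     open('frame1.raw', 'wb') as out:
    out.write(hdr.read())
    out.write(frame.read())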

Then try your option 3 code on the combined file frame1.raw. You also mention a compatibility issue with recent LibRaw versions but, since you don't say where that issue is documented or what error you actually see, nobody is likely to be able to help with it...
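
If the LibRaw incompatibility does turn out to block rawkit, one alternative worth testing (my suggestion, not something the raspiraw documentation covers) is rawpy, another LibRaw binding. It exposes the undemosaiced sensor values directly, which is all you need for per-channel averages; whether LibRaw accepts the header-prepended file is something you would have to try:

# Sketch using rawpy (pip install rawpy), an alternative LibRaw binding.
# Assumes LibRaw can parse the header-prepended frame1.raw - untested.
import rawpy

with rawpy.imread('frame1.raw') as raw:
    mosaic = raw.raw_image_visible   # raw sensor values, no demosaic
    colors = raw.raw_colors_visible  # 0=R, 1=G, 2=B, 3=second G
    print('R:', mosaic[colors == 0].mean())
    print('G:', mosaic[(colors == 1) | (colors == 3)].mean())
    print('B:', mosaic[colors == 2].mean())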

Mark Setchell