I have a pipeline that records images on a Raspberry Pi, converts them to TIFF via dcraw, and then estimates the average intensity of the three channels (RGB).
Since the conversion to TIFF is the bottleneck, I wonder if I can skip it. More specifically: is it possible to extract the average intensity of the three channels directly from the raw image?
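For reference, the working (but slow) pipeline is roughly equivalent to the sketch below; the dcraw invocation and the file names are placeholders rather than my exact commands:

    import subprocess
    import numpy as np
    from PIL import Image

    # Convert the raw capture to a TIFF with dcraw (-T writes a TIFF);
    # the flags and the input/output names are illustrative only.
    subprocess.run(['dcraw', '-T', 'image.raw'], check=True)

    # Load the TIFF and average each of the three channels over all pixels.
    rgb = np.asarray(Image.open('image.tiff'))
    print(dict(zip('RGB', rgb.mean(axis=(0, 1)))))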
Here is an example of an image I am trying to work with (note: it has poor exposure and resolution, so don't expect to see anything interesting): https://drive.google.com/file/d/1FNFGuIAw-948c0loj5y4aZTRJK0TpoWB/view?usp=sharing

It was collected as:

    raspiraw -md 7 -t 1000 -ts /dev/shm/tstamps.csv -hd0 /dev/shm/hd0.32k -h 32 -w 32 --vinc 1F --fps 1 -sr 1 -o /dev/shm/out.%06d.raw
So far, I have tried the following three options:
- Using `PIL` directly, but it throws an `UnidentifiedImageError`:

      from PIL import Image

      # Raises UnidentifiedImageError: PIL cannot identify the raw sensor dump.
      Image.open('image.raw')
- Using `numpy` directly. This gives me large integers, but I am not sure how they map to the RGB image (see the sketch after this list for my guess at the layout):

      >>> import numpy as np
      >>> np.fromfile('image.raw', dtype=np.uint16)
      array([4112, 4112, 4184, ..., 0, 0, 0], dtype=uint16)
- Using `rawkit`, but unfortunately there is a compatibility issue with more recent versions of LibRaw (e.g. 0.20.2):

      import numpy as np
      from PIL import Image
      from rawkit.raw import Raw

      raw_image = Raw('image.raw')
      buffered_image = np.array(raw_image.to_buffer())
      Image.frombytes('RGB', (raw_image.metadata.width, raw_image.metadata.height), buffered_image)
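If I could interpret the raw buffer directly, I imagine the per-channel averages would come from something like the sketch below. Everything format-specific in it is a guess on my part: the row stride, the padding, and the Bayer order (I have assumed a BGGR 2x2 tile, which may well be wrong for this sensor and mode), so it describes what I am trying to achieve rather than something that works:

    import numpy as np

    # Guessed geometry: 32x32 was requested via -w 32 -h 32.
    WIDTH, HEIGHT = 32, 32

    data = np.fromfile('image.raw', dtype=np.uint16)

    # Guess the row stride from the buffer size and crop to the visible area;
    # the real file layout (header, padding, bit packing) may differ.
    stride = data.size // HEIGHT
    mosaic = data[:HEIGHT * stride].reshape(HEIGHT, stride)[:, :WIDTH]

    # Assumed Bayer tile:  B G
    #                      G R
    # Average each sub-plane; the two green planes are pooled together.
    blue = mosaic[0::2, 0::2].mean()
    green = np.concatenate([mosaic[0::2, 1::2].ravel(),
                            mosaic[1::2, 0::2].ravel()]).mean()
    red = mosaic[1::2, 1::2].mean()
    print({'R': red, 'G': green, 'B': blue})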