Goal
- Capture images with Android smartphones attached to moving vehicles
- frequency: 1 Hz
- reference model: Google Pixel 3a
- objects of interest: the road/way in front of the vehicle
- picture usage: as input for machine learning (e.g. an RNN) to identify damage on the road/way surfaces
- capture environment: outdoor, only on cloudy days
Current state
- Capturing works (currently using JPEG instead of RAW because of the data size)
- auto-exposure works
- static focus distance works
Challenge
- The surfaces of the ways/roads in the pictures are often blurry
- The motion blur mostly comes from the vehicle shaking the (rigidly mounted) phone
- To reduce the motion blur we want to use a "Shutter Speed Priority Mode"
- i.e. minimize the exposure time => increase the ISO (accepting the increased noise)
- there is only one aperture (f/1.8) available
- there is no "Shutter Speed Priority Mode" (short: Tv/S-Mode) available in the Camera2 API
- the CameraX API does not (yet) offer what we need (static focus, Tv/S Mode)
Steps
- Set the shutter speed to the shortest exposure time supported (easy)
- Automatically adjust the ISO setting to keep the exposure correct (e.g. with this formula; see the sketch after this list)
- To calculate the ISO, the only missing input is the scene light level (EV)
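For these steps, here is a minimal sketch of what I have in mind (Camera2 manual controls with AE switched off; `isoForEv` and `applyManualExposure` are my own illustrative helper names, the f/1.8 default matches the Pixel 3a main camera, and the computed ISO would still need to be clamped to `SENSOR_INFO_SENSITIVITY_RANGE`):

```kotlin
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.CaptureRequest
import kotlin.math.pow

// Exposure equation: EV100 = log2(N^2 / t) - log2(ISO / 100)
//               =>   ISO   = 100 * N^2 / (t * 2^EV100)
fun isoForEv(ev100: Double, exposureTimeSec: Double, aperture: Double = 1.8): Int =
    (100.0 * aperture * aperture / (exposureTimeSec * 2.0.pow(ev100))).toInt()

// Fully manual exposure: AE off, fixed (shortest) exposure time, ISO computed from the EV.
fun applyManualExposure(builder: CaptureRequest.Builder, exposureTimeNs: Long, iso: Int) {
    builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF)
    builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureTimeNs)
    builder.set(CaptureRequest.SENSOR_SENSITIVITY, iso)
}
```

The idea would be to compute the ISO from the latest EV estimate right before each capture and call `applyManualExposure` on the still-capture request builder.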
Question
- How can I estimate the EV continuously during capturing to adjust the ISO automatically while using a fixed shutter speed?
Ideas so far:
- If I could read out the "recommendations" of the Camera2 auto-exposure (AE) routine without actually enabling AE_MODE_ON, then I could easily calculate the EV. However, I have not found an API for this so far, and I guess it is not possible without rooting the device.
- If the ambient light sensor provided all the information needed to auto-expose (i.e. to calculate the EV), this would also be very easy. However, from my understanding it only measures the incident light, not the reflected light, so the measurement does not take the actual objects in the picture into account (how their surfaces reflect light).
- If I could get the information from the pixels of the last captures, this would also be doable (if the calculation fits into the time between two captures). However, from my understanding the pixel brightness depends heavily on the objects captured, i.e. if the brightness of the captured objects changes (many "black horses" or "white bears" at the side of the road/way) I would calculate bad EV values. A rough sketch of this approach is at the end of the post.
- Capture auto-exposed images in between the actual captures and calculate the light level from the settings that AE selected for those in-between captures, then use it for the actual captures. From my understanding this would be a relatively "good" way, but it is quite hard on resources; I am not sure the time available between two captures is enough for this. See the sketch below.
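To make this last idea more concrete, here is a sketch of what I mean (untested; `PreviewEvMeter` is an illustrative name): leave AE enabled on the repeating preview request, whose frames are never saved, read the exposure time and ISO that AE chose from each `CaptureResult`, convert them to an EV at ISO 100, and feed that EV into the manual still capture:

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.CaptureResult
import android.hardware.camera2.TotalCaptureResult
import kotlin.math.log2

// Derives the scene EV (referenced to ISO 100) from the parameters the AE routine
// picked for the preview stream. lastEv100 is my own field, not an API value.
class PreviewEvMeter(private val aperture: Double = 1.8) :
    CameraCaptureSession.CaptureCallback() {

    @Volatile var lastEv100: Double? = null
        private set

    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        val exposureNs = result.get(CaptureResult.SENSOR_EXPOSURE_TIME) ?: return
        val iso = result.get(CaptureResult.SENSOR_SENSITIVITY) ?: return
        val t = exposureNs / 1e9  // exposure time in seconds
        // EV100 = log2(N^2 / t) - log2(ISO / 100)
        lastEv100 = log2(aperture * aperture / t) - log2(iso / 100.0)
    }
}
```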
Maybe I am overlooking a simpler solution. Has anyone done something like this?
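For completeness, here is also a rough sketch of the pixel-based idea from the list above. Everything in it is an assumption on my side, including the mid-grey target of 0.18 and the clamp to +/-2 stops that is meant to soften the "black horses / white bears" problem; it expects a YUV_420_888 frame:

```kotlin
import android.media.Image
import kotlin.math.log2

// Averages the Y (luma) plane of the last frame and converts it into an EV correction
// relative to a mid-grey target, clamped so that unusual scene content cannot push
// the exposure too far in one step.
fun evCorrectionFromLuma(image: Image, maxStep: Double = 2.0): Double {
    val yPlane = image.planes[0]
    val buffer = yPlane.buffer
    val rowStride = yPlane.rowStride
    val pixelStride = yPlane.pixelStride

    var sum = 0L
    var count = 0L
    // Subsample every 8th pixel to keep the cost low between captures.
    for (row in 0 until image.height step 8) {
        for (col in 0 until image.width step 8) {
            sum += buffer.get(row * rowStride + col * pixelStride).toInt() and 0xFF
            count++
        }
    }
    val meanLuma = sum.toDouble() / count / 255.0  // normalized to 0..1
    // Positive result => frame darker than mid-grey => raise the ISO by that many stops.
    val stops = log2(0.18 / meanLuma.coerceAtLeast(1e-4))
    return stops.coerceIn(-maxStep, maxStep)
}
```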