What is the recommended way to extract the actual pixels represented by the Face objects detected through the Android gms.vision.face.* packages?
I am working with the FaceTracker sample code from here for retrieving video frames from the device camera and running them through face detection.
The pipeline attaches a FaceDetector to a CameraSource, which receives Frames from a SurfaceView object. The FaceDetector creates a Face object for each detected face. However, as far as I can tell, Face objects do not hold the underlying pixels that the face encompasses. I would like to store the detected face for future verification.
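For context, this is roughly how I read the hookup in the sample (condensed and untested; the preview size and camera facing are placeholders I picked for illustration). The Tracker callbacks only ever hand me a Face plus frame metadata, so I don't see where the pixels would come from:

```java
import android.content.Context;

import com.google.android.gms.vision.CameraSource;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.MultiProcessor;
import com.google.android.gms.vision.Tracker;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

public class FacePipelineSketch {

    // Wires a FaceDetector into a CameraSource, the way the FaceTracker sample does.
    public static CameraSource buildPipeline(Context context) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(true)
                .build();

        detector.setProcessor(new MultiProcessor.Builder<>(new MultiProcessor.Factory<Face>() {
            @Override
            public Tracker<Face> create(Face face) {
                return new Tracker<Face>() {
                    @Override
                    public void onUpdate(Detector.Detections<Face> detections, Face face) {
                        // face.getPosition(), getWidth(), getHeight() give the bounding box,
                        // and detections.getFrameMetadata() describes the frame, but I don't
                        // see an accessor for the frame's pixel buffer here.
                    }
                };
            }
        }).build());

        return new CameraSource.Builder(context, detector)
                .setRequestedPreviewSize(640, 480)           // placeholder values
                .setFacing(CameraSource.CAMERA_FACING_FRONT)  // placeholder value
                .build();
    }
}
```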
One possible solution (as far as I can tell) would be to receive a frame from the SurfaceView, hold on to that buffer, run face detection on that single Frame, and then use the Face objects returned to extract the pixels. However, I do not know enough about the implementation details to guess at the overhead of making a call with a single frame (model initialization, etc.). I'm re-familiarizing myself with Java after 10 years away, which means I'm also a bit slow at grasping the code I'm reading :)
Any preferred solutions out there?