I am building a camera app (using the camera2 API) that handles three tasks. First, it previews the current image in a TextureView that is part of a Fragment. Secondly, it forwards single images to a second instance for further processing (using JavaCV or OpenCV in native JNI). Finally, it records and stores the video stream.
I started out with the camera2 API sample that saves an image via an ImageReader and extended it to process single images as they become available. Sources: Camera2Basic, ImageReader onAvailable processing
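For context, my single-image path currently looks roughly like this (the YUV_420_888 format, the reader size and processFrame are my own choices/placeholders, not from the sample):

// ImageReader set up on a background HandlerThread (mBackgroundHandler, as in Camera2Basic).
mImageReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.YUV_420_888, /*maxImages*/ 2);
mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null) {
            return;
        }
        try {
            // Hand the Y/U/V planes (direct ByteBuffers) to the JNI/OpenCV side.
            processFrame(image.getPlanes()); // hypothetical native wrapper
        } finally {
            image.close(); // must be closed, otherwise the reader runs out of buffers
        }
    }
}, mBackgroundHandler);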
For the recording I've read about how to feed camera data into a MediaCodec or a MediaRecorder; a rough sketch of the MediaRecorder route as I understand it is below.
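As far as I understand it, the recorder is configured first and then exposes a Surface that can be used as an output target; the output path, size, frame rate and bit rate below are placeholders from my own setup:

// MediaRecorder configured for a Surface video source; getSurface() is only
// valid after prepare().
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(mVideoFilePath);           // placeholder path
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setVideoEncodingBitRate(10000000);       // placeholder bit rate
mMediaRecorder.setVideoFrameRate(30);                   // placeholder frame rate
mMediaRecorder.setVideoSize(mVideoWidth, mVideoHeight); // placeholder size
mMediaRecorder.prepare();
Surface mRecorderSurface = mMediaRecorder.getSurface();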
The thing that confuses me now is the following: what is the most efficient way in camera2 to manage memory for these tasks (single image processing, video recording) without copying data too often?
Does the API expect developers to add more targets to the previewRequestBuilder (and the corresponding surfaces to the capture session), or is it preferable to use a single-threaded pipeline that works on the images and stores them in a ByteBuffer that keeps all frames of the video? My current setup looks like this:
mRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
// This is the output Surface
Surface surface = new Surface(texture);
Surface mImageSurface = mImageReader.getSurface();
// Add the new target to CaptureRequest.Builder
mRequestBuilder.addTarget(surface); // preview in TextureView
mRequestBuilder.addTarget(mImageSurface); // used for image processing
// TODO: need video recording target or usage of
// one target for image processing and video recording?
// Here, we create a CameraCaptureSession for camera preview.
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageSurface), new CameraCaptureSession.StateCallback(){...});
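If adding a third target is the intended pattern, I assume the recording part would simply be appended like this (not tested, just my reading of the docs):

// Assumed three-surface variant: preview, image processing and recording
// all fed from the same repeating request.
Surface recorderSurface = mMediaRecorder.getSurface();
mRequestBuilder.addTarget(recorderSurface); // video recording
mCameraDevice.createCaptureSession(
        Arrays.asList(surface, mImageSurface, recorderSurface),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                // start the repeating request here
            }
            @Override
            public void onConfigureFailed(CameraCaptureSession session) { }
        },
        mBackgroundHandler);

Is this the intended approach, or does each additional surface mean another copy of the frame data?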