4

I'm developing an application which requires heavy image processing using camera input and real-time results display. I've decided to use OpenGL and OpenCV along with Android's normal camera API. So far it has become a bit of a multithreading nightmare, and unfortunately I feel very restricted by the lack of documentation on the onPreviewFrame() callback.

I am aware from the documentation that onPreviewFrame() is called on the thread which acquires the camera using Camera.open(). What confuses me is how this callback is scheduled: it seems to fire at a fixed frame rate. My current architecture relies on the onPreviewFrame() callback to initiate the image processing/display cycle, and it seems to deadlock when I block the camera callback thread for too long, so I suspect that the callback is inflexible when it comes to scheduling. I'd like to lower the frame rate to test this, but my device doesn't support setting a lower preview frame rate.

I started with the code over at http://maninara.blogspot.ca/2012/09/render-camera-preview-using-opengl-es.html. This code is not very parallel, and it is only meant to display exactly the data which the camera returns. For my needs, I adapted the code to draw bitmaps, and I use a dedicated thread to buffer the camera data to another dedicated heavy-lifting image processing thread (all outside of the OpenGL thread).

Here is my code (simplified):

CameraSurfaceRenderer.java

class CameraSurfaceRenderer implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener,
    Camera.PreviewCallback
{

static int[]                surfaceTexPtr;

static CameraSurfaceView    cameraSurfaceView;
static Context              rendererContext;
static FloatBuffer          pVertex;
static FloatBuffer          pTexCoord;
static int                  hProgramPointer;

static Camera               camera;
static SurfaceTexture       surfaceTexture;

static Bitmap               procBitmap;
static int[]                procBitmapPtr;

static boolean              updateSurfaceTex = false;

static ConditionVariable    previewFrameLock;
static ConditionVariable    bitmapDrawLock;

// MarkerFinder extends CameraImgProc
static MarkerFinder         markerFinder = new MarkerFinder();
static Thread               previewCallbackThread;

static
{
    previewFrameLock = new ConditionVariable();
    previewFrameLock.open();

    bitmapDrawLock = new ConditionVariable();
    bitmapDrawLock.open();
}

CameraSurfaceRenderer(Context context, CameraSurfaceView view)
{
    rendererContext = context;
    cameraSurfaceView = view;

    // … // Load pVertex and pTexCoord vertex buffers
}

public void close()
{
    // … // This code usually doesn’t have the chance to get called
}

@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config)
{
    // … // Initialize a texture object for the bitmap data

    surfaceTexPtr = new int[1];
    GLES20.glGenTextures(1, surfaceTexPtr, 0); // generate the texture name backing the SurfaceTexture
    surfaceTexture = new SurfaceTexture(surfaceTexPtr[0]);
    surfaceTexture.setOnFrameAvailableListener(this);

    //Initialize camera on its own thread so preview frame callbacks are processed in parallel
    previewCallbackThread = new Thread()
    {
        @Override
        public void run()
        {
            try {
                camera = Camera.open();
            } catch (RuntimeException e) {
                // … // Bitch to the user through a Toast on the UI thread
            }
            assert camera != null;
            //Callback set on CameraSurfaceRenderer class, but executed on worker thread
            camera.setPreviewCallback(CameraSurfaceRenderer.this);
            try {
                camera.setPreviewTexture(surfaceTexture);
            } catch (IOException e) {
                Log.e(Const.TAG, "Unable to set preview texture");
            }

            Looper.prepare();
            Looper.loop();
        }
    };
    previewCallbackThread.start();

   // … // More OpenGL initialization stuff
}

@Override
public void onDrawFrame(GL10 unused)
{
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    synchronized (this)
    {
        surfaceTexture.updateTexImage();
    }

    // Bind bitmap data to the texture
    bindBitmap(procBitmap);

    // … // Acquire shader program attributes, render
    GLES20.glFlush();
}

@Override
public synchronized void onFrameAvailable(SurfaceTexture surfaceTexture)
{
    cameraSurfaceView.requestRender();
}

@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    Bitmap bitmap = markerFinder.exchangeRawDataForProcessedImg(data, null, camera);

    // … // Check for null bitmap

    previewFrameLock.block();

    procBitmap = bitmap;

    previewFrameLock.close();
    bitmapDrawLock.open();
}

void bindBitmap(Bitmap bitmap)
{
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, procBitmapPtr[0]);

    bitmapDrawLock.block();

    if (bitmap != null && !bitmap.isRecycled())
    {
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }

    bitmapDrawLock.close();
    previewFrameLock.open();
}

@Override
public void onSurfaceChanged(GL10 unused, int width, int height)
{
    GLES20.glViewport(0, 0, width, height);

    // … // Set camera parameters

    camera.startPreview();
}

void deleteTexture()
{
    GLES20.glDeleteTextures(1, surfaceTexPtr, 0);
}
}

CameraImgProc.java (abstract class)

public abstract class CameraImgProc
{
CameraImgProcThread  thread = new CameraImgProcThread();
Handler              handler;
ConditionVariable    bufferSwapLock = new ConditionVariable(true);
Runnable             processTask = new Runnable()
{
    @Override
    public void run()
    {
        imgProcBitmap = processImg(lastWidth, lastHeight, cameraDataBuffer, imgProcBitmap);
        bufferSwapLock.open();
    }
};

int lastWidth    = 0;
int lastHeight   = 0;

Mat cameraDataBuffer;
Bitmap imgProcBitmap;

public CameraImgProc()
{
    thread.start();
    handler = thread.getHandler();
}

protected abstract Bitmap allocateBitmapBuffer(int width, int height);

public final Bitmap exchangeRawDataForProcessedImg(byte[] data, Bitmap dirtyBuffer, Camera camera)
{
    Camera.Parameters parameters = camera.getParameters();
    Camera.Size size = parameters.getPreviewSize();

    // Wait for worker thread to finish processing image
    bufferSwapLock.block();
    bufferSwapLock.close();

    Bitmap freshBuffer = imgProcBitmap;
    imgProcBitmap = dirtyBuffer;

    // Reallocate buffers if size changes to avoid overflow
    assert size != null;
    if (lastWidth != size.width || lastHeight != size.height)
    {
        lastHeight  = size.height;
        lastWidth   = size.width;

        if (cameraDataBuffer != null) cameraDataBuffer.release();
        //YUV format requires 1.5 times as much information in vertical direction
        cameraDataBuffer = new Mat((lastHeight * 3) / 2, lastWidth, CvType.CV_8UC1);

        imgProcBitmap = allocateBitmapBuffer(lastWidth, lastHeight);
        // Buffers had to be resized, therefore no processed data to return

        cameraDataBuffer.put(0, 0, data);

        handler.post(processTask);
        return null;
    }

    // If the caller did not pass a dirty buffer to reuse
    if (imgProcBitmap == null)
        imgProcBitmap = allocateBitmapBuffer(lastWidth, lastHeight);

    // Exchange data
    cameraDataBuffer.put(0, 0, data);

    // Give img processing task to worker thread
    handler.post(processTask);

    return freshBuffer;
}

protected abstract Bitmap processImg(int width, int height, Mat cameraData, Bitmap dirtyBuffer);

class CameraImgProcThread extends Thread
{
    volatile Handler handler;

    @Override
    public void run()
    {
        Looper.prepare();
        handler = new Handler();
        Looper.loop();
    }

    Handler getHandler()
    {
        // Spin until the thread's Looper is running and the Handler exists
        while (handler == null)
        {
            try {
                Thread.sleep(5);
            } catch (InterruptedException e) {
                // Do nothing
            }
        }
        return handler;
    }
}
}

I want the application to be robust no matter how long the CameraImgProc.processImg() function takes to finish. Unfortunately, when camera frames are fed in at a fixed rate, the only real option is to drop frames whenever the image processing hasn't finished yet; otherwise frames pile up and I quickly exhaust my buffers. One frame-dropping variant I'm considering is sketched below.
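
A minimal sketch of that variant, assuming android.os.ConditionVariable.block(long), which returns false on timeout (the 50 ms value is arbitrary):

@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    Bitmap bitmap = markerFinder.exchangeRawDataForProcessedImg(data, null, camera);

    // Wait a bounded time for the renderer to consume the previous bitmap;
    // if it is still busy, drop this frame instead of blocking the callback
    if (!previewFrameLock.block(50))
        return;

    procBitmap = bitmap;

    previewFrameLock.close();
    bitmapDrawLock.open();
}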

My questions are as follows:

Is there any way to slow down the Camera.PreviewCallback frequency on demand?

Is there an existing Android API for getting frames on demand from the camera?

Are there existing solutions to this problem which I can refer to?

genpfault
Boston Walker

  • The "Show + capture camera" activity in Grafika (https://github.com/google/grafika) performs image processing on incoming preview frames with a GLES fragment shader. You can see a demo on youtube (http://www.youtube.com/watch?v=kH9kCP2T5Gg). This may not be a viable approach for "heavy" image processing, because shaders are a pain to work with. – fadden Feb 23 '14 at 17:13
  • @fadden: I wonder how good the read-pixels performance is on an average Android device. That alone could make offloading the work to the GPU irrelevant. – Alex Cohn Feb 23 '14 at 18:53
  • `glReadPixels()` performance varies significantly between devices and different releases of Android. For a 720p frame, most Nexus devices will take 6-10 ms, but I've seen it take 170 ms. (Grafika includes a trivial `glReadPixels()` benchmark for this reason.) Offloading to the GPU is useful because some operations, like the 3x3 convolution filter used in the example, are the sort of thing GPUs do very well. Paths that don't touch the pixels with the CPU tend to be more efficient in general, because they can avoid copying buffers of data around. – fadden Feb 23 '14 at 19:36

2 Answers

8

onPreviewFrame() is called on the thread which acquires the camera using Camera.open()

That's a common misunderstanding. The key word missing from this description is "event". To have the camera callbacks scheduled on a non-UI thread, you need an "event thread", which on Android means a HandlerThread (a thread with a Looper). Please see my explanation and sample elsewhere on SO. Using a plain thread to open the camera, as in your code, is not useless, because Camera.open() itself may take a few hundred milliseconds on some devices, but an event thread is much, much better.
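
For illustration, a minimal sketch of such an event thread (the CameraHandlerThread and CameraOpenedCallback names are mine, not from any library):

import android.hardware.Camera;
import android.os.Handler;
import android.os.HandlerThread;

class CameraHandlerThread extends HandlerThread
{
    private final Handler handler;

    CameraHandlerThread()
    {
        super("CameraHandlerThread");
        start();                              // HandlerThread sets up its own Looper
        handler = new Handler(getLooper());   // getLooper() blocks until the Looper is ready
    }

    interface CameraOpenedCallback
    {
        void onOpened(Camera camera);
    }

    void openCamera(final CameraOpenedCallback callback)
    {
        handler.post(new Runnable()
        {
            @Override
            public void run()
            {
                // Opening the camera here binds its callbacks to this thread's Looper
                callback.onOpened(Camera.open());
            }
        });
    }
}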

Now let me address your questions: no, you cannot control the schedule of camera callbacks.

You can use setOneShotPreviewCallback() if you want to receive callbacks at 1 FPS or less; a sketch of that polling variant follows. Your mileage may vary, and it depends on the device, but if you want to check the camera more often I would recommend using setPreviewCallbackWithBuffer() and simply returning from onPreviewFrame() for the frames you don't need. The performance hit from these empty callbacks is minor.
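
A sketch of ~1 FPS polling with setOneShotPreviewCallback(); pollHandler, processFrame() and the 1000 ms interval are my placeholders, and 'camera' is assumed to be a field holding the open Camera:

final Handler pollHandler = new Handler();  // created on a Looper thread
final Runnable poll = new Runnable()
{
    @Override
    public void run()
    {
        // Each one-shot callback delivers exactly one frame, then disarms itself
        camera.setOneShotPreviewCallback(new Camera.PreviewCallback()
        {
            @Override
            public void onPreviewFrame(byte[] data, Camera camera)
            {
                processFrame(data);  // placeholder: hand the frame to a worker
            }
        });
        pollHandler.postDelayed(this, 1000);  // re-arm for the next frame in ~1 s
    }
};
pollHandler.post(poll);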

Note that even when you offload the callbacks to a background thread, they are blocking: if it takes 200 ms to process a preview frame, the camera will wait. Therefore, I usually hand the byte[] off to a worker thread and release the callback thread quickly, as in the sketch below. I don't recommend slowing down the flow of preview callbacks by processing them in blocking mode, because after you release the thread, the next callback will deliver a frame with an undefined timestamp. Maybe it will be a fresh one, or maybe it will be one buffered a while ago.
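
A sketch of that hand-off; workerHandler is assumed to live on a HandlerThread and to handle MSG_FRAME in its handleMessage(), and the drop policy is my placeholder:

private static final int MSG_FRAME = 1;

@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    // Drop the frame if the previous one is still waiting to be processed,
    // then return immediately so the camera thread is never blocked
    if (!workerHandler.hasMessages(MSG_FRAME))
        workerHandler.sendMessage(workerHandler.obtainMessage(MSG_FRAME, data));
}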

Alex Cohn
  • Thanks for clarifying the otherwise confusing use of the term "event thread" in the Android documentation. Note that even with a buffering thread allowing onPreviewFrame() to return earlier, the application will still have to drop frames if it can't process them fast enough, so I have opted to add a timeout to the call to previewFrameLock.block(), which lets the application drop frames when it needs to while still allowing the callback to return. I have also implemented setPreviewCallbackWithBuffer(), but this is really no more than a memory usage improvement. – Boston Walker Feb 23 '14 at 17:47
  • Your logic seems correct. It is important to handle the timeout in onPreviewFrame(), because otherwise you may get outdated frames for processing. setPreviewCallbackWithBuffer() is more than a memory optimization: it can dramatically reduce garbage collection, which is the main cause of random, unpredictable delays in the camera preview flow. – Alex Cohn Feb 23 '14 at 18:50
1

You can schedule the callbacks indirectly in later platform releases (>4.0). You set up the buffers that the callback will use to deliver the data: typically two buffers, one being written by the camera HAL while you read from the other. No new frame will be delivered to you (by calling your onPreviewFrame()) until you return a buffer that the camera can write to, which also means the camera will drop frames in the meantime. A sketch of this arrangement follows.
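
A minimal sketch of the two-buffer arrangement, assuming the default NV21 preview format at 1.5 bytes per pixel; processFrame() is a placeholder:

Camera.Size size = camera.getParameters().getPreviewSize();
int frameBytes = size.width * size.height * 3 / 2;  // NV21: 1.5 bytes per pixel

camera.addCallbackBuffer(new byte[frameBytes]);  // buffer the camera HAL writes into
camera.addCallbackBuffer(new byte[frameBytes]);  // buffer you read from meanwhile

camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback()
{
    @Override
    public void onPreviewFrame(byte[] data, Camera camera)
    {
        processFrame(data);              // placeholder: consume the frame
        camera.addCallbackBuffer(data);  // return the buffer; until you do,
                                         // no new frames are delivered (dropped)
    }
});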

Birkler