
This question was asked here but never answered -- and it is somewhat different from my need anyway.

I want to record video, while running the Google Vision library in the background, so whenever my user holds up a barcode (or approaches one closely enough) the camera will automatically detect and scan the barcode -- and all the while it is recording the video. I know the Google Vision demo is pretty CPU intensive, but when I try a simpler version of it (i.e. without grabbing every frame all the time and handing it to the detector) I'm not getting reliable barcode reads.

(I am running a Samsung Galaxy S4 Mini on KitKat 4.4.3. Unfortunately, for reasons known only to Samsung, they no longer deliver the autofocus callback, so it is impossible to know when the camera has grabbed focus and to trigger the barcode read at that moment. That makes grabbing and checking every frame seem like the only viable solution.)

So, at least to prove the concept, I wanted to simply modify the Google Vision demo (found here).

It seemed the easiest thing to do was simply jump into the code and add a MediaRecorder. I did this in CameraSourcePreview, inside the surfaceCreated callback.

Like this:

private class SurfaceCallback implements SurfaceHolder.Callback
{
    @Override
    public void surfaceCreated(SurfaceHolder surface)
    {
        mSurfaceAvailable = true;
        try
        {
            startIfReady();
            if (mSurfaceAvailable)
            {
                Camera camera = mCameraSource.getCameraSourceCamera();
                /** ADD MediaRecorder to Google Example  **/
                if (camera != null && recordThis)
                {
                    if (mMediaRecorder == null)
                    {
                        mMediaRecorder = new MediaRecorder();
                        camera.unlock();
                        SurfaceHolder sh = mSurfaceView.getHolder();
                        mMediaRecorder.setPreviewDisplay(sh.getSurface());
                        mMediaRecorder.setCamera(camera);
                        mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
                        mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
                        mMediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
                        File newOutPut = getVideoFile();
                        String newOutPutFileName = newOutPut.getPath();
                        mMediaRecorder.setOutputFile(newOutPutFileName);
                        Log.d("START MR", newOutPutFileName);
                        try
                        {
                            mMediaRecorder.prepare();
                        }
                        catch (Exception e)
                        {
                            Log.e(TAG, "MediaRecorder prepare() failed", e);
                        }
                        mCameraSource.mediaRecorder = mMediaRecorder;
                        mMediaRecorder.start();
                    }
                }
            }
        }
        catch (SecurityException se)
        {
            Log.e(TAG, "Do not have permission to start the camera", se);
        }
        catch (IOException e)
        {
            Log.e(TAG, "Could not start camera source.", e);
        }
    }
}

That DOES record video, while still handing each frame off to the Vision code. But, strangely, when I do that the camera no longer seems to autofocus correctly, so the barcodes are never really in focus and are therefore not recognized.
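One thing that may help (an assumption on my part, not something the demo does): once `camera.unlock()` hands the camera to the MediaRecorder, the demo's focus handling stops working, but setting a *continuous* focus mode on `Camera.Parameters` before unlocking (e.g. `params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)` when `getSupportedFocusModes()` contains it) lets the driver keep focusing on its own during recording. Below is a plain-Java sketch of just the mode-selection logic, with the Android `FOCUS_MODE_*` constants inlined as their string values so it runs off-device; the class name `FocusModePicker` is hypothetical.

```java
import java.util.Arrays;
import java.util.List;

public class FocusModePicker {
    // String values mirror the Camera.Parameters FOCUS_MODE_* constants.
    private static final List<String> PREFERENCE = Arrays.asList(
            "continuous-video", "continuous-picture", "auto");

    // Returns the best supported focus mode for recording, or null if none
    // of the preferred modes is available (leave the camera's default alone).
    public static String pick(List<String> supported) {
        for (String mode : PREFERENCE) {
            if (supported != null && supported.contains(mode)) {
                return mode;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // A device that reports fixed, auto and continuous-video focus:
        List<String> supported = Arrays.asList("fixed", "auto", "continuous-video");
        System.out.println(pick(supported)); // prints continuous-video
    }
}
```

On the device, the result would be fed to `params.setFocusMode(...)` followed by `camera.setParameters(params)`, before `camera.unlock()`.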

My next thought was to simply capture the frames as the barcode detector is handling the frames, and save them to the disk one by one (I can mux them together later.)
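One small detail to watch when muxing later: if the frames are named with a plain counter (e.g. `frame_1.jpg` ... `frame_10.jpg`), a lexicographic directory listing puts `_10` before `_2`, so the files must be sorted numerically before feeding them to the muxer. A self-contained sketch of that ordering, with hypothetical file names:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FrameOrder {
    private static final Pattern SEQ = Pattern.compile("_(\\d+)\\.jpg$");

    // Extracts the numeric suffix from names like "frame_12.jpg";
    // returns -1 when the name does not match the pattern.
    static long sequenceOf(String name) {
        Matcher m = SEQ.matcher(name);
        return m.find() ? Long.parseLong(m.group(1)) : -1;
    }

    // Sorts file names by capture order (numeric suffix), not lexicographically.
    static List<String> inCaptureOrder(List<String> names) {
        List<String> sorted = new ArrayList<>(names);
        sorted.sort(Comparator.comparingLong(FrameOrder::sequenceOf));
        return sorted;
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList("frame_10.jpg", "frame_2.jpg", "frame_1.jpg");
        System.out.println(inCaptureOrder(files));
        // prints [frame_1.jpg, frame_2.jpg, frame_10.jpg]
    }
}
```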

I did this in CameraSource.java.

This does not seem to capture all of the frames, even though I am writing them out from a separate AsyncTask running in the background, which I thought would catch up eventually, even if it took a while. The saving is not optimized, but it looks as though frames are being dropped throughout, not just at the end.

To add this code, I tried putting it in the private class FrameProcessingRunnable in the run() method.

Right after the FrameBuilder Code, I added this:

            if (saveImagesIsEnabled)
            {
                if (data == null)
                {
                    Log.d(TAG, "data == NULL");
                }
                else
                {
                    // Copy the frame data: the Vision pipeline recycles this
                    // buffer for the next preview frame.
                    SaveImageAsync saveImage = new SaveImageAsync(mCamera.getParameters().getPreviewSize());
                    saveImage.execute(data.array().clone());
                }
            }

Which calls this class:

Camera.Size lastKnownPreviewSize = null;
public class SaveImageAsync extends AsyncTask<byte[], Void, Void>
{
    Camera.Size previewSize;

    public SaveImageAsync(Camera.Size _previewSize)
    {
        previewSize = _previewSize;
        lastKnownPreviewSize = _previewSize;
    }
    @Override
    protected Void doInBackground(byte[]... dataArray)
    {
        try
        {
            if (previewSize == null)
            {
                if (lastKnownPreviewSize != null)
                    previewSize = lastKnownPreviewSize;
                else
                    return null;
            }
            byte[] bitmapData = dataArray[0];
            if (bitmapData == null)
            {
                Log.d("doInBackground","NULL: ");
                return null;
            }
            // Output goes under external storage in a "tmp" subdirectory
            // (requires WRITE_EXTERNAL_STORAGE); create it if it doesn't exist.
            File storageDir = new File(Environment.getExternalStorageDirectory(), "tmp");
            storageDir.mkdirs();
            String imageFileName = baseFileName + "_" + Long.toString(sequentialCount++) + ".jpg";
            String filePath = new File(storageDir, imageFileName).getPath();
            FileOutputStream out = null;
            YuvImage yuvimage = new YuvImage(bitmapData, ImageFormat.NV21, previewSize.width,
                    previewSize.height, null);
            try
            {
                out = new FileOutputStream(filePath);
                yuvimage.compressToJpeg(new Rect(0, 0, previewSize.width,
                        previewSize.height), 100, out);
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
            finally
            {
                try
                {
                    if (out != null)
                    {
                        out.close();
                    }
                }
                catch (IOException e)
                {
                    e.printStackTrace();
                }
            }
        }
        catch (Exception ex)
        {
            // getMessage() can be null; log the throwable itself instead
            Log.e("doInBackground", "Failed to save frame", ex);
        }
        return null;
    }
}
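A likely contributor to the dropped frames (my reading, not confirmed): AsyncTasks execute serially on one shared thread, and per-frame JPEG encoding at preview rate simply cannot keep up, so work piles up unboundedly; meanwhile the `data` buffer handed to `run()` is recycled by CameraSource, so any task still holding it sees the bytes overwritten. A sketch of an alternative, using a single worker with a small bounded queue that deliberately drops the oldest pending frame when encoding falls behind. `FrameSaver` is a hypothetical name, and `encodeAndWrite` only counts frames here so the sketch runs off-device; on Android it would hold the `YuvImage.compressToJpeg` call.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FrameSaver {
    // One worker thread and a small bounded queue: when encoding falls
    // behind the preview rate, the oldest queued frame is silently dropped
    // instead of blocking the camera thread or exhausting memory.
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(8),
            new ThreadPoolExecutor.DiscardOldestPolicy());

    private final AtomicInteger processed = new AtomicInteger();

    public void submit(byte[] frame) {
        // Copy first: the caller's buffer is recycled for the next preview
        // frame, so it must not be shared with the background worker.
        final byte[] copy = frame.clone();
        executor.execute(() -> encodeAndWrite(copy));
    }

    // Stand-in for YuvImage.compressToJpeg(...) + FileOutputStream;
    // here it only counts frames so the sketch is runnable off-device.
    void encodeAndWrite(byte[] nv21) {
        processed.incrementAndGet();
    }

    public int processedCount() {
        return processed.get();
    }

    public void shutdown() throws InterruptedException {
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        FrameSaver saver = new FrameSaver();
        for (int i = 0; i < 3; i++) {
            saver.submit(new byte[]{(byte) i});
        }
        saver.shutdown();
        System.out.println("processed=" + saver.processedCount()); // prints processed=3
    }
}
```

Dropping frames deliberately under load keeps the output smooth-ish rather than letting the backlog grow until everything stalls; the queue size 8 is an arbitrary guess to tune on the device.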

I'm OK with either the MediaRecorder approach or the brute-force frame-capture approach, but neither seems to be working correctly.

MarkJoel60
  • More information after playing with this some more: part of the issue is that I am running KitKat on a Samsung Galaxy S4. Samsung, for reasons I can't explain, took the autofocus callback away from the S4. Google, in their sample code, fakes this behavior. I believe the reason it "disappears" when the MediaRecorder is used is that the MediaRecorder has its own Camera, and that one doesn't do the autofocus "fake-out" – MarkJoel60 Sep 07 '16 at 00:38

0 Answers