
I am trying face detection and adding a mask (graphic overlay) using the Google Vision API. The problem is that I cannot get the output from the camera after detecting a face and adding the mask. So far I have tried the solution from this GitHub issue: https://github.com/googlesamples/android-vision/issues/24. Based on that issue I added a custom detector class (see "Mobile Vision API - concatenate new detector object to continue frame processing") and added the frame-to-bitmap conversion from "How to create Bitmap from grayscaled byte buffer image?" to my detector class.

MyFaceDetector class

class MyFaceDetector extends Detector<Face> {
    private final Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // Custom frame processing: convert the NV21 preview data to a Bitmap.
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, w, h, null);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // 100 = JPEG quality
        byte[] jpegArray = baos.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        Log.e("got bitmap", "bitmap val " + bitmap);
        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
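For context, a wrapped detector like this is hooked into the pipeline when the CameraSource is built, as in the linked GitHub issue. A minimal sketch of that wiring (assuming the standard FaceDetector/CameraSource setup from the android-vision samples; `context` and the processor are placeholders):

```java
// Sketch: wiring the wrapped detector into the Mobile Vision pipeline.
// Assumes the standard setup from the android-vision samples.
FaceDetector faceDetector = new FaceDetector.Builder(context)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .build();

MyFaceDetector myFaceDetector = new MyFaceDetector(faceDetector);
myFaceDetector.setProcessor(/* your face tracker / processor here */);

CameraSource cameraSource = new CameraSource.Builder(context, myFaceDetector)
        .setRequestedPreviewSize(640, 480)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .build();
```

With this, each preview frame passes through `MyFaceDetector.detect()` before the delegate runs face detection.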


I am getting a rotated bitmap, and it is without the mask (graphic overlay) I have added. How can I get the camera output with the mask?

Thanks in advance.

Jack

1 Answer


The simple answer is: You can't.

Why? The Android camera outputs frames as an NV21 ByteBuffer, so the detector never sees your overlay. You must generate your mask from the landmark points in a separate Bitmap, then join the two images yourself. Sorry, but that's how the Android Camera API works; nothing else can be done, you must do it manually.
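The "generate the mask separately, then join" step is just drawing one image over the other at the landmark position. Since Android's `Canvas`/`Bitmap` classes don't run on a plain JVM, here is the same compositing idea sketched with `java.awt.image.BufferedImage` for illustration (on Android you would use `Canvas.drawBitmap` instead; the sizes and coordinates below are made up):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class MaskComposite {
    // Draws the mask image onto the frame at the given landmark position
    // and returns the merged result.
    static BufferedImage compose(BufferedImage frame, BufferedImage mask, int x, int y) {
        BufferedImage out = new BufferedImage(
                frame.getWidth(), frame.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(frame, 0, 0, null); // camera frame first
        g.drawImage(mask, x, y, null);  // mask on top, at the landmark point
        g.dispose();
        return out;
    }

    public static void main(String[] args) {
        // A black 100x100 "frame" standing in for the camera preview.
        BufferedImage frame = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
        Graphics2D fg = frame.createGraphics();
        fg.setColor(Color.BLACK);
        fg.fillRect(0, 0, 100, 100);
        fg.dispose();

        // A red 20x20 "mask" standing in for the graphic overlay.
        BufferedImage mask = new BufferedImage(20, 20, BufferedImage.TYPE_INT_ARGB);
        Graphics2D mg = mask.createGraphics();
        mg.setColor(Color.RED);
        mg.fillRect(0, 0, 20, 20);
        mg.dispose();

        BufferedImage merged = compose(frame, mask, 40, 40);
        System.out.println(merged.getRGB(45, 45) == Color.RED.getRGB());   // inside the mask
        System.out.println(merged.getRGB(5, 5) == Color.BLACK.getRGB());   // outside the mask
    }
}
```

On Android the equivalent would be a mutable `Bitmap`, a `Canvas` wrapping it, and `canvas.drawBitmap(mask, x, y, null)` at the position derived from `Face.getLandmarks()`.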

Also, I wouldn't grab the camera preview, convert it to a YuvImage, and then to a Bitmap. That process consumes a lot of resources and makes the preview very slow. Instead, I would build the Frame directly, which is much faster and rotates your preview internally, so you don't lose time doing it:

outputFrame = new Frame.Builder()
        .setImageData(mPendingFrameData, mPreviewSize.getWidth(),
                mPreviewSize.getHeight(), ImageFormat.NV21)
        .setId(mPendingFrameId)
        .setTimestampMillis(mPendingTimeMillis)
        .setRotation(mRotation)
        .build();
mDetector.receiveFrame(outputFrame);

All the code can be found in CameraSource.java

Ezequiel Adrian