
I'm trying to create an Android application that will process camera frames in real time. To start off with, I just want to display a grayscale version of what the camera sees. I've managed to extract the appropriate values from the byte array in the onPreviewFrame method. Below is just a snippet of my code:

byte[] pic;
int pic_size;
Bitmap picframe;
public void onPreviewFrame(byte[] frame, Camera c)
{
    Camera.Size size = c.getParameters().getPreviewSize();
    pic_size = size.height * size.width;
    pic = new byte[pic_size];
    System.arraycopy(frame, 0, pic, 0, pic_size);
    picframe = BitmapFactory.decodeByteArray(pic, 0, pic_size);
}
}

The first [width*height] values of the byte[] frame array are the luminance (greyscale) values. Once I've extracted them, how do I display them on the screen as an image? It's not a 2D array either, so how would I specify the width and height?

NavMan

3 Answers


You can get extensive guidance from the OpenCV4Android SDK. Look into their available examples, specifically "Tutorial 1 Basic - Android Camera".

But, as it was in my case, for intensive image processing this will become slower than acceptable for a real-time image-processing application. A good replacement for their approach is to convert onPreviewFrame's byte array to a YuvImage:

YuvImage yuvImage = new YuvImage(frame, ImageFormat.NV21, width, height, null);

Create a Rect the same size as the image:

Rect imageSizeRectangle = new Rect(0, 0, width, height);

Create a ByteArrayOutputStream and pass it, the rectangle, and the compression quality to compressToJpeg():

ByteArrayOutputStream baos = new ByteArrayOutputStream();
yuvImage.compressToJpeg(imageSizeRectangle, 100, baos);

byte[] imageData = baos.toByteArray();

Bitmap previewBitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);

Rendering these preview frames on a surface, and the best practices involved, is a whole other dimension. =)
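Editor's note: the JPEG round-trip above is convenient but costly per frame. A direct NV21-to-ARGB conversion in plain Java avoids it. The sketch below is my own illustration (the class and method names are hypothetical, not from any Android API), using common YCbCr-to-RGB coefficients:

```java
// Sketch: direct NV21 -> ARGB conversion, skipping the JPEG round-trip.
// Class/method names are illustrative, not from any Android API.
public class Nv21Converter {
    // frame: NV21 data (Y plane first, then interleaved V/U at half resolution).
    public static int[] toArgb(byte[] frame, int width, int height) {
        int frameSize = width * height;
        int[] pixels = new int[frameSize];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = frame[row * width + col] & 0xFF;
                // Each 2x2 block of pixels shares one V/U pair.
                int uvIndex = frameSize + (row >> 1) * width + (col & ~1);
                int v = frame[uvIndex] & 0xFF;
                int u = frame[uvIndex + 1] & 0xFF;
                int r = clamp((int) (y + 1.370705f * (v - 128)));
                int g = clamp((int) (y - 0.698001f * (v - 128) - 0.337633f * (u - 128)));
                int b = clamp((int) (y + 1.732446f * (u - 128)));
                pixels[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return pixels;
    }

    private static int clamp(int c) {
        return c < 0 ? 0 : (c > 255 ? 255 : c);
    }
}
```

The resulting int[] can then go straight into Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888) without any JPEG encode/decode.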

Heartache
    Does this happen to convert the YUV image to a Bitmap with an intermediary JPEG? Hardly a feasible real-time solution – Rekin Jan 03 '14 at 19:58
  • I would strongly advise against this. OpenCV on android will add complexity to the project and will also require the native libraries which are around 10mb in size. Android does have all the facilities to accomplish what the OP requires. OpenCV is a great library, but overkill for this situation. – protectedmember Sep 29 '15 at 18:37

This very old post has caught my attention now.

The API available in '11 was much more limited. Today one can use a SurfaceTexture (see example) to preview the camera stream after (some) manipulation.

Alex Cohn

This is not an easy task to achieve with the current Android tools/APIs. In general, real-time image processing is better done at the NDK level. But to just show black and white, you can still do it in Java. The byte array containing the frame data is in YUV format, where the Y plane comes first. So if you take just the Y plane (the first width x height bytes), you already have the black-and-white image.
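A minimal sketch of that idea in plain Java (the helper class below is my own illustration, not from the answer): expand each luminance byte into an opaque grayscale ARGB pixel.

```java
// Sketch: map the NV21 Y plane (first width*height bytes) to grayscale ARGB pixels.
// Class/method names are illustrative.
public class YPlane {
    public static int[] toGrayArgb(byte[] frame, int width, int height) {
        int[] pixels = new int[width * height];
        for (int i = 0; i < pixels.length; i++) {
            int y = frame[i] & 0xFF; // luminance, 0..255
            // Replicate luminance into R, G and B; alpha fully opaque.
            pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y;
        }
        return pixels;
    }
}
```

The array can then be displayed via Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888) and drawn into an ImageView or onto a SurfaceView canvas.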

I did achieve this through extensive work and trials. You can view the app at google: https://play.google.com/store/apps/details?id=com.nm.camerafx

Nazar Merza