32

In Android, I get an Image object from this camera tutorial: https://inducesmile.com/android/android-camera2-api-example-tutorial/. But I now want to loop through the pixel values. Does anyone know how I can do that? Do I need to convert it to something else first, and if so, how?

Thanks

omega

9 Answers

40

If you want to loop through all the pixels, you first need to convert the Image to a Bitmap. Since the source code acquires a JPEG Image from the ImageReader, you can decode the bytes directly into a Bitmap:

    Image image = reader.acquireLatestImage();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    // Use remaining(), not capacity(): capacity can exceed the valid data length.
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);

Once you have the Bitmap object, you can iterate through all of its pixels.
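For the iteration itself, here is a minimal sketch of unpacking ARGB channels in plain Java (the hard-coded `int[]` stands in for what `Bitmap.getPixels(...)` would fill in on-device; each int packs one pixel as 0xAARRGGBB):

```java
public class PixelLoop {
    public static void main(String[] args) {
        // Stand-in for bitmap.getPixels(pixels, 0, width, 0, 0, width, height).
        int width = 2, height = 1;
        int[] pixels = { 0xFFFF0000, 0xFF0000FF }; // opaque red, opaque blue

        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int color = pixels[y * width + x];
                int a = (color >>> 24) & 0xFF; // unsigned shift for the alpha byte
                int r = (color >> 16) & 0xFF;
                int g = (color >> 8) & 0xFF;
                int b = color & 0xFF;
                System.out.println(x + "," + y + " a=" + a + " r=" + r + " g=" + g + " b=" + b);
            }
        }
    }
}
```

On Android, filling `pixels` with one bulk `getPixels` call is much faster than calling `bitmap.getPixel(x, y)` once per pixel.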

Rod_Algonquin
  • What's the purpose of the `buffer.get(bytes)` line if the buffer is not used later? – mercury0114 Aug 28 '17 at 14:57
  • It loads the image into memory, from the buffer into the bytes array. So the `get` is to retrieve the bytes from the image buffer. – Rod_Algonquin Aug 28 '17 at 15:07
  • Is there any other way to do this (access the pixels from the Image)? Loading a Bitmap requires a lot of memory. – Mugur Oct 05 '17 at 10:16
  • Why do you only get the first plane? Isn't there relevant data in the other planes? – hellowill89 Jan 11 '19 at 02:40
  • @hellowill89 The plane count maps directly to the format of the image; e.g. JPEG contains only one plane, so only the first plane is retrieved and copied from the buffer into target memory. https://developer.android.com/reference/android/media/Image.html#getFormat() – Rod_Algonquin Jan 11 '19 at 15:47
  • I see, so this only works with JPEG Images. Cameras often output YUV Images though, which make use of 3 (?) planes. In that case, would the planes be appended sequentially? – hellowill89 Jan 11 '19 at 20:43
  • Each plane corresponds to a single color channel; depending on the format you are extracting, the documentation shows that each format results in different planes with different color schemes. So no, you do not append them but lay them together. – Rod_Algonquin Jan 11 '19 at 22:21
  • `IllegalStateException: BitmapFactory.decodeByte…tes, 0, bytes.size, null) must not be null` – Someone Somewhere Jun 30 '20 at 02:06
  • @Rod_Algonquin How do you lay them together..? – Rony Tesler Sep 29 '20 at 22:27
  • Faced the same problem: the bitmap is null when converting a DEPTH16 image: ```Attempt to invoke virtual method 'boolean android.graphics.Bitmap.compress(android.graphics.Bitmap$CompressFormat, int, java.io.OutputStream)' on a null object reference``` – zhangxaochen Nov 19 '20 at 09:14
4

YuvToRgbConverter is useful for converting an Image to a Bitmap:

https://github.com/android/camera-samples/blob/master/Camera2Basic/utils/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt

Usage sample:

    val yuvToRgbConverter = YuvToRgbConverter(context) // the converter takes a Context (see the linked file)
    val bmp = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
    yuvToRgbConverter.yuvToRgb(image, bmp)
kenma
2

Actually you have two questions in one: 1) How do you loop through `android.media.Image` pixels? 2) How do you convert an `android.media.Image` to a Bitmap?

The first is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and U+V components are in different planes. In many image-processing cases you need only the Y plane, that is, the gray part of the image. To get it I suggest code like this:

    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuffer = yPlane.getBuffer();
    // Allocate for the whole plane, not just one row: rows are
    // yPlane.getRowStride() bytes apart and may be padded.
    byte[] yImage = new byte[yBuffer.remaining()];
    yBuffer.get(yImage);

The yImage byte array now holds the gray (luma) pixels of the frame. In the same manner you can get the U+V parts too. Note that they can come U first and V after, or V first and U after, and they may be interleaved (the common case with the Camera2 API). So you get UVUV....

For debug purposes, I often write the frame to a file and open it with the Vooya app (Linux) to check the format.
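Indexing into a Y plane has to honor the plane's row stride, since rows may be padded. A plain-Java sketch with hypothetical stride values (on Android they come from `Plane.getRowStride()` and `Plane.getPixelStride()`):

```java
public class YPlaneIndex {
    // Read the luma value at (x, y) from a flattened Y plane.
    static int lumaAt(byte[] yPlane, int rowStride, int pixelStride, int x, int y) {
        // Mask with 0xFF: Java bytes are signed, but luma is 0..255.
        return yPlane[y * rowStride + x * pixelStride] & 0xFF;
    }

    public static void main(String[] args) {
        // A 2x2 image whose rows are padded to 4 bytes (rowStride > width).
        int rowStride = 4, pixelStride = 1;
        byte[] yPlane = { 10, 20, 0, 0,  30, 40, 0, 0 };
        System.out.println(lumaAt(yPlane, rowStride, pixelStride, 1, 1)); // prints 40
    }
}
```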

The second question is a bit more complex. To get a Bitmap object I found some example code in the TensorFlow project here. The most interesting function for you is `convertImageToBitmap`, which returns RGB values.

To convert them to a real Bitmap, do the following:

    Bitmap rgbFrameBitmap;
    int[] cachedRgbBytes = null;
    byte[][] cachedYuvBytes = null; // cache buffers across frames, as in the TensorFlow sample
    cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
    rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    rgbFrameBitmap.setPixels(cachedRgbBytes, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());

Note: there are more options for converting YUV to RGB frames, so if you only need the pixel values, a Bitmap may not be the best choice, as it can consume more memory than you need just to get the RGB values.
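If you do go without a Bitmap, the per-pixel YUV-to-RGB math itself is small. A sketch using the full-range BT.601 coefficients (one common convention; other variants with slightly different coefficients exist):

```java
public class YuvPixel {
    static int clamp(int v) { return Math.max(0, Math.min(255, v)); }

    // Full-range BT.601: convert one YUV pixel to a packed ARGB int.
    static int yuvToArgb(int y, int u, int v) {
        int d = u - 128, e = v - 128;
        int r = clamp(Math.round(y + 1.370705f * e));
        int g = clamp(Math.round(y - 0.337633f * d - 0.698001f * e));
        int b = clamp(Math.round(y + 1.732446f * d));
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // Neutral chroma (u = v = 128) yields a gray pixel equal to Y.
        System.out.printf("%08X%n", yuvToArgb(255, 128, 128)); // FFFFFFFF
        System.out.printf("%08X%n", yuvToArgb(0, 128, 128));   // FF000000
    }
}
```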

Arkady
1

Java Conversion Method

ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888)
                .build();

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
         // call toBitmap function
         Bitmap bitmap = toBitmap(image);
         image.close();
    }
});
private Bitmap bitmapBuffer;
private Bitmap toBitmap(@NonNull ImageProxy image) {
   if(bitmapBuffer == null){
       bitmapBuffer = Bitmap.createBitmap(image.getWidth(),image.getHeight(),Bitmap.Config.ARGB_8888);
   }
   bitmapBuffer.copyPixelsFromBuffer(image.getPlanes()[0].getBuffer());
   return bitmapBuffer;
}
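One caveat with the direct `copyPixelsFromBuffer` copy (and a plausible cause of the crash mentioned in the comments below): the RGBA plane's row stride can be larger than `width * 4`, in which case the row padding has to be stripped before the buffer matches what the Bitmap expects. A plain-Java sketch of the repacking, with made-up stride values:

```java
public class StripPadding {
    // Repack a padded RGBA byte plane into a tight width*4-bytes-per-row array.
    static byte[] stripRowPadding(byte[] src, int rowStride, int width, int height) {
        int rowBytes = width * 4; // 4 bytes per RGBA pixel
        byte[] dst = new byte[rowBytes * height];
        for (int y = 0; y < height; y++) {
            // Copy only the pixel bytes of each row, skipping the stride padding.
            System.arraycopy(src, y * rowStride, dst, y * rowBytes, rowBytes);
        }
        return dst;
    }

    public static void main(String[] args) {
        // 1x2 image: rowStride 8 > rowBytes 4, so each row carries 4 padding bytes.
        byte[] padded = { 1, 2, 3, 4, 0, 0, 0, 0,  5, 6, 7, 8, 0, 0, 0, 0 };
        byte[] tight = stripRowPadding(padded, 8, 1, 2);
        System.out.println(tight.length); // 8
    }
}
```

On-device, the stride comes from `image.getPlanes()[0].getRowStride()`, and the repacked array can be wrapped with `ByteBuffer.wrap(tight)` before `copyPixelsFromBuffer`.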
  • Works fine even in Kotlin, e.g.: imageAnalyzer = ImageAnalysis.Builder() .... .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888), then in analyze: myBitmap.copyPixelsFromBuffer(image.planes[0].buffer) – Gabor Szigeti Oct 20 '22 at 13:53
  • Why just the 0th plane in the `image.getPlanes()[0].getBuffer()`? What happens with the other planes? – Csaba Toth Jan 23 '23 at 14:15
  • fail. this crashes on copyPixelsFromBuffer – dcarl661 Feb 02 '23 at 17:49
  • @CsabaToth The input data is already in RGBA_8888 format, here only need to copy the memory. When it is yuv format, there will be other planes. – cheungxiongwei Feb 06 '23 at 09:57
  • @dcarl661 The above code snippet, I have used it normally on android. Is your function input data in other formats? – cheungxiongwei Feb 06 '23 at 10:02
  • @cheungxiongwei I added my usage on Android from the analyzer imageProxy to the conversion to BitMap using a static function I adapted from BitMapUtils. – dcarl661 Feb 20 '23 at 22:47
0

https://docs.oracle.com/javase/1.5.0/docs/api/java/nio/ByteBuffer.html#get%28byte[]%29

According to the Java docs, the `buffer.get` method transfers bytes from this buffer into the given destination array. An invocation of this method of the form `src.get(a)` behaves in exactly the same way as the invocation

 src.get(a, 0, a.length) 
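A self-contained illustration of that contract, using plain `java.nio` (no Android needed):

```java
import java.nio.ByteBuffer;

public class BufferGet {
    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.wrap(new byte[] { 1, 2, 3 });
        byte[] a = new byte[src.remaining()];
        src.get(a); // equivalent to src.get(a, 0, a.length); advances the position
        System.out.println(a[0] + " " + a[1] + " " + a[2]); // 1 2 3
        System.out.println(src.remaining());                // 0: buffer is consumed
    }
}
```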
0

I assume you have a YUV_420_888 Image provided by the camera. Using the interesting tutorial How to use YUV (YUV_420_888) Image in Android, I can propose the following solution to convert the Image to a Bitmap:

    private Bitmap yuv420ToBitmap(Image image, Context context) {
        RenderScript rs = RenderScript.create(context);
        ScriptIntrinsicYuvToRGB script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

        // Refer the logic in a section below on how to convert a YUV_420_888 image
        // to single channel flat 1D array. For sake of this example I'll abstract it
        // as a method.
        byte[] yuvByteArray = image2byteArray(image);

        Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuvByteArray.length);
        Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

        Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
                .setX(image.getWidth())
                .setY(image.getHeight());
        Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

        // The allocations above "should" be cached if you are going to perform
        // repeated conversion of YUV_420_888 to Bitmap.
        in.copyFrom(yuvByteArray);
        script.setInput(in);
        script.forEach(out);

        Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
        out.copyTo(bitmap);
        return bitmap;
    }

and a supporting function to convert the 3-plane YUV image to a one-dimensional byte array:

    private byte[] image2byteArray(Image image) {
        if (image.getFormat() != ImageFormat.YUV_420_888) {
            throw new IllegalArgumentException("Invalid image format");
        }

        int width = image.getWidth();
        int height = image.getHeight();

        Image.Plane yPlane = image.getPlanes()[0];
        Image.Plane uPlane = image.getPlanes()[1];
        Image.Plane vPlane = image.getPlanes()[2];

        ByteBuffer yBuffer = yPlane.getBuffer();
        ByteBuffer uBuffer = uPlane.getBuffer();
        ByteBuffer vBuffer = vPlane.getBuffer();

        // Full size Y channel and quarter size U+V channels.
        int numPixels = (int) (width * height * 1.5f);
        byte[] nv21 = new byte[numPixels];
        int index = 0;

        // Copy Y channel.
        int yRowStride = yPlane.getRowStride();
        int yPixelStride = yPlane.getPixelStride();
        for(int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                nv21[index++] = yBuffer.get(y * yRowStride + x * yPixelStride);
            }
        }

        // Copy VU data; NV21 format is expected to have YYYYVU packaging.
        // The U/V planes are guaranteed to have the same row stride and pixel stride.
        int uvRowStride = uPlane.getRowStride();
        int uvPixelStride = uPlane.getPixelStride();
        int uvWidth = width / 2;
        int uvHeight = height / 2;

        for(int y = 0; y < uvHeight; ++y) {
            for (int x = 0; x < uvWidth; ++x) {
                int bufferIndex = (y * uvRowStride) + (x * uvPixelStride);
                // V channel.
                nv21[index++] = vBuffer.get(bufferIndex);
                // U channel.
                nv21[index++] = uBuffer.get(bufferIndex);
            }
        }
        return nv21;
    }
basileus
0

Start with the ImageProxy from the analyzer:

@Override
public void analyze(@NonNull ImageProxy imageProxy)
{
    Image mediaImage = imageProxy.getImage();
    if (mediaImage != null)
    {
        toBitmap(mediaImage);
    }
    imageProxy.close();
}

Then convert to a bitmap:

private Bitmap toBitmap(Image image)
{
    if (image.getFormat() != ImageFormat.YUV_420_888)
    {
        throw new IllegalArgumentException("Invalid image format");
    }
    byte[] nv21b      = yuv420ThreePlanesToNV21BA(image.getPlanes(), image.getWidth(), image.getHeight());
    YuvImage yuvImage = new YuvImage(nv21b, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvImage.compressToJpeg      (new Rect(0, 0,
                                 yuvImage.getWidth(),
                                 yuvImage.getHeight()),
                                 mQuality, baos);
    mFrameBuffer               = baos;

    byte[] imageBytes = baos.toByteArray();
    Bitmap bm         = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);

    return bm;
}

Here's the static function that worked for me, adapted from BitmapUtils (the `areUVPlanesNV21` and `unpackPlane` helpers come from the same file):

public static byte [] yuv420ThreePlanesToNV21BA(Plane[] yuv420888planes, int width, int height)
{
    int imageSize = width * height;
    byte[] out = new byte[imageSize + 2 * (imageSize / 4)];

    if (areUVPlanesNV21(yuv420888planes, width, height)) {
        // Copy the Y values.
        yuv420888planes[0].getBuffer().get(out, 0, imageSize);

        ByteBuffer uBuffer = yuv420888planes[1].getBuffer();
        ByteBuffer vBuffer = yuv420888planes[2].getBuffer();
        // Get the first V value from the V buffer, since the U buffer does not contain it.
        vBuffer.get(out, imageSize, 1);
        // Copy the first U value and the remaining VU values from the U buffer.
        uBuffer.get(out, imageSize + 1, 2 * imageSize / 4 - 1);
    }
    else
    {
        // Fallback to copying the UV values one by one, which is slower but also works.
        // Unpack Y.
        unpackPlane(yuv420888planes[0], width, height, out, 0, 1);
        // Unpack U.
        unpackPlane(yuv420888planes[1], width, height, out, imageSize + 1, 2);
        // Unpack V.
        unpackPlane(yuv420888planes[2], width, height, out, imageSize, 2);
    }
    return out;
}
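The fast path above exploits NV21's memory layout: after the full-size Y block, the quarter-resolution V and U samples alternate (VUVU...). A plain-Java sketch of the per-sample interleaving, on a toy 2x2 image, for clarity:

```java
public class Nv21Pack {
    // Interleave quarter-resolution U and V arrays into an NV21 buffer's tail.
    static byte[] packNv21(byte[] yData, byte[] uData, byte[] vData) {
        byte[] out = new byte[yData.length + uData.length + vData.length];
        System.arraycopy(yData, 0, out, 0, yData.length);
        int index = yData.length;
        for (int i = 0; i < uData.length; i++) {
            out[index++] = vData[i]; // V comes first in NV21
            out[index++] = uData[i];
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] y = { 1, 2, 3, 4 };   // 2x2 luma block
        byte[] u = { 10 };           // one chroma sample per 2x2 pixel block
        byte[] v = { 20 };
        byte[] nv21 = packNv21(y, u, v);
        System.out.println(nv21[4] + " " + nv21[5]); // 20 10
    }
}
```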
dcarl661
-3

bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.image);

sapeg
-5

1- Store the path to the image file in a string variable. To decode the content of an image file, you need the file path stored in your code as a string. Use the following syntax as a guide:

String picPath = "/mnt/sdcard/Pictures/mypic.jpg";

2- Create a Bitmap object using BitmapFactory:

Bitmap picBitmap = BitmapFactory.decodeFile(picPath);
Fazal Jarral