
I wanted to learn more about the Camera2 API, so I downloaded the official Camera2 sample from https://github.com/googlesamples/android-Camera2Basic and played with it until I understood how it works. Now I asked myself whether I could process the frames in real time, which would be a cool feature. I assume this can be done within the onImageAvailable() callback of an ImageReader.OnImageAvailableListener.

Therefore, I implemented the following listener:

private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
            = new ImageReader.OnImageAvailableListener() {

        @Override
        public void onImageAvailable(ImageReader reader) {

            Image image = null;
            int width, height;
            int[] pixels;
            try {
                image = reader.acquireLatestImage();
                if (image != null) {

                    // Copy the raw bytes out of the first image plane
                    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                    byte[] bytes = new byte[buffer.capacity()];
                    buffer.get(bytes, 0, bytes.length);

                    // Decode the bytes into a Bitmap
                    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);

                    width = bitmap.getWidth();
                    height = bitmap.getHeight();
                    pixels = new int[width * height];

                    // Run the native filter over the pixel array and write the result back
                    bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
                    FilterLib.floydsteinberg(pixels, width, height);
                    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);

                    image.close();
                }
            } catch (Exception e) {
                Log.w("ImageReader", e.getMessage());
            }

            mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
        }

    };

So what I did was extract a Bitmap from the Image. The pixels of that Bitmap are then passed to the floydsteinberg() method, which is a native implementation of the Floyd-Steinberg dithering algorithm. The native code takes the pixel array, processes it according to that algorithm, and returns the processed pixels, which then become the new pixels of the Bitmap.
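
For reference, the Java side of FilterLib is nothing more than a declaration of the native method used above. A minimal sketch is shown below; the library name passed to loadLibrary() is a placeholder and the actual C/C++ implementation is omitted, since only the signature matters for this question:

public class FilterLib {

    static {
        // "filterlib" is a placeholder for the actual native library name
        System.loadLibrary("filterlib");
    }

    // Applies Floyd-Steinberg dithering to the ARGB pixel array in place
    public static native void floydsteinberg(int[] pixels, int width, int height);
}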

But now: how can I see that new (processed) Bitmap on screen? More specifically, is there a way to convert that Bitmap back to an Image?
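
The only half-idea I have so far is to not convert the Bitmap back to an Image at all, but to draw it onto a second TextureView placed next to the preview, roughly like this (mOverlayView is a placeholder name, and I am not sure this is a sensible approach):

// Rough sketch only: mOverlayView would be a separate TextureView,
// not the one the camera preview is already rendering into.
Canvas canvas = mOverlayView.lockCanvas();
if (canvas != null) {
    canvas.drawBitmap(bitmap, 0, 0, null);
    mOverlayView.unlockCanvasAndPost(canvas);
}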

I found some SO threads such as "camera2 output to Bitmap" and "Android Camera2 API Showing Processed Preview Image", but I could not get much out of them.

ebeninki

0 Answers