
Here is a small piece of code from my Android app using OpenCV (for Google Glass). I am trying to turn an image (at location picturePath) from colour to grayscale and then overwrite the original colour image. As it stands, this code saves an image that is half grayscale, as it should be, and half completely black:

private void rGBProcessing (final String picturePath, Mat image) {
    //BitmapFactory Creates Bitmap objects from various sources,
    //including files, streams, and byte-arrays
    Bitmap myBitmapPic = BitmapFactory.decodeFile(picturePath);
    // Mat(rows, cols, type) takes rows (height) first, then cols (width)
    image = new Mat(myBitmapPic.getHeight(), myBitmapPic.getWidth(), CvType.CV_8UC4);
    Mat imageTwo = new Mat(myBitmapPic.getHeight(), myBitmapPic.getWidth(), CvType.CV_8UC1);
    Utils.bitmapToMat(myBitmapPic, image);
    Imgproc.cvtColor(image, imageTwo, Imgproc.COLOR_RGBA2GRAY);
    //Highgui.imwrite(picturePath, imageTwo);
    Utils.matToBitmap(imageTwo, myBitmapPic);

    FileOutputStream out = null;
    try {
        out = new FileOutputStream(picturePath);
        myBitmapPic.compress(Bitmap.CompressFormat.PNG, 100, out); // write the grayscale Bitmap back to picturePath
        // PNG is a lossless format, so the quality factor (100) is ignored
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (out != null) {
                out.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Can anybody please explain why half of the produced image is completely black and advise me on a correction? It seems to me that, since the image manipulation has only been partially completed, perhaps one piece of the code is not finishing before the next part starts? Any help is appreciated. Cheers!

Update: Here is what I'm seeing: [screenshot of the saved image]

The picture on Google Glass was half black, but this upload just cuts off the bottom half. Then, a little while later, even after the app is no longer on the screen (I don't know how to stop debugging, though, so it could still be working in the background), I get the full grayscale image. Can someone explain what is going on and give me a potential fix, please?

  • You convert your image to grayscale and not to B&W. What do you mean half black and white? Could you attach a sample input/output image? – Kornel Feb 22 '15 at 08:17
  • I've edited the question to include this information and show a pic. I mean it stores half of the image in grayscale but the bottom half of the image is not present. Then I unplugged Glass from the debugging computer, plugged it back in, and in the memory was the full grayscale image. Why is this happening and how can I remedy it? – MichaelAndroidNewbie Feb 22 '15 at 12:56
  • can you try to use imread() / imwrite() instead of the bitmap conversions ? – berak Feb 22 '15 at 12:58
  • imread() works fine to replace bitmapToMat, but imread() reads the picture into memory as a Mat. Which is fine, however I would like to be able to view the image and have been using Windows Photo Viewer on the debugging computer to do so, and it doesn't support the Mat format. – MichaelAndroidNewbie Feb 22 '15 at 13:56
  • So I do eventually get a full grayscale image saved to memory. For some reason this only happens after I remove Google Glass from the computer and plug it back in :S. This is a real problem since my app will require access to the full grayscale image whilst it is running. Any help appreciated. Cheers! – MichaelAndroidNewbie Feb 22 '15 at 17:16
  • Given that it seems the problem is now with reading the bitmap to memory rather than any of the other code, I have posted a new question [http://stackoverflow.com/questions/28661197/why-does-saving-a-bitmap-take-so-long] – MichaelAndroidNewbie Feb 22 '15 at 17:58

1 Answer


The alpha channel of your image may be premultiplied into the RGB values. There are two different ways to think about transparency:

  1. Conventional Alpha Blending, which defines transparency as:

    • RGB specifies the color of the object,
    • Alpha specifies how solid it is.

    In math: blend(source, dest) = (source.rgb * source.a) + (dest.rgb * (1 - source.a))
    In this world, RGB and alpha are independent. You can change one without affecting the other. Even when an object is fully transparent it still has the same RGB as if it was opaque.

  2. Premultiplied Alpha Blending

    • RGB specifies how much color the object contributes to the scene
    • Alpha specifies how much it obscures whatever is behind it

    In math: blend(source, dest) = source.rgb + (dest.rgb * (1 - source.a))
    In this world, RGB and alpha are linked. To make an object transparent you must reduce both its RGB (to contribute less color) and its alpha (to obscure less of whatever is behind it). Fully transparent objects no longer have any RGB color, so there is only one value that represents 100% transparency: RGB and alpha all zero, as in your case (a per-pixel sketch of undoing this follows below).
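
To make the linkage concrete, here is a minimal per-pixel sketch (a hypothetical helper for illustration, not an OpenCV or Android API) of recovering straight, non-premultiplied RGB from a premultiplied ARGB pixel:

static int unpremultiply(int premultipliedArgb) {
    // Hypothetical helper: undo premultiplication so RGB and alpha become independent again.
    int a = (premultipliedArgb >>> 24) & 0xFF;
    if (a == 0) {
        // Fully transparent: the original RGB was scaled to zero and cannot be recovered.
        return 0;
    }
    int r = Math.min(255, ((premultipliedArgb >> 16) & 0xFF) * 255 / a);
    int g = Math.min(255, ((premultipliedArgb >> 8) & 0xFF) * 255 / a);
    int b = Math.min(255, (premultipliedArgb & 0xFF) * 255 / a);
    return (a << 24) | (r << 16) | (g << 8) | b;
}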

OpenCV's bitmapToMat() gives you the opportunity to convert from premultiplied alpha back to conventional (straight) alpha. Try:
Utils.bitmapToMat(myBitmapPic, image, true);


Just a remark: bitmapToMat() reallocates the output Mat object as needed, so it can be passed in empty; you don't need to take care of allocating image yourself.
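
Putting it together, a minimal sketch of your method with that change applied (and without the manual Mat allocations, which bitmapToMat() and cvtColor() handle themselves) could look like this; the file-writing part is kept as in your question:

private void rGBProcessing(final String picturePath, Mat image) {
    Bitmap myBitmapPic = BitmapFactory.decodeFile(picturePath);

    // The third argument tells OpenCV to un-premultiply the alpha channel
    // while copying the Bitmap into the Mat.
    Utils.bitmapToMat(myBitmapPic, image, true);

    Mat gray = new Mat();
    Imgproc.cvtColor(image, gray, Imgproc.COLOR_RGBA2GRAY);
    Utils.matToBitmap(gray, myBitmapPic);

    FileOutputStream out = null;
    try {
        out = new FileOutputStream(picturePath);
        myBitmapPic.compress(Bitmap.CompressFormat.PNG, 100, out);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (out != null) {
                out.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}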

Kornel