
I have an image with a lot of anti-aliased lines in it, and I'm trying to remove pixels that fall below a certain alpha-channel threshold (anything at or above the threshold gets converted to full 255 alpha). I've got this coded up and working; it's just not as fast as I would like when running it on large images. Does anyone have an alternative method they could suggest?

//This will convert all pixels with alpha >= minAlpha to full 255 alpha
public static void flattenImage(BufferedImage inSrcImg, int minAlpha)
{
    //loop through all the pixels in the image
    for (int y = 0; y < inSrcImg.getHeight(); y++)
    {
        for (int x = 0; x < inSrcImg.getWidth(); x++)
        {
            //get the current pixel (with alpha channel)
            Color c = new Color(inSrcImg.getRGB(x,y), true);

            //if the alpha value is above the threshold, convert it to full 255
            if(c.getAlpha() >= minAlpha)
            {
                inSrcImg.setRGB(x,y, new Color(c.getRed(), c.getGreen(), c.getBlue(), 255).getRGB());
            }
            //otherwise set it to 0
            else
            {
                inSrcImg.setRGB(x,y, new Color(0,0,0,0).getRGB()); //fully transparent
            }
        }
    }
}

Per @BenoitCoudour's comments I've modified the code accordingly, but it appears to be affecting the resulting RGB values of the pixels. Any idea what I might be doing wrong?

public static void flattenImage(BufferedImage src, int minAlpha)
{
    int w = src.getWidth();
    int h = src.getHeight();

    int[] rgbArray = src.getRGB(0, 0, w, h, null, 0, w);

    for (int i=0; i<w*h; i++)
    {
        int a = (rgbArray[i] >> 24) & 0xff;
        int r = (rgbArray[i] >> 16) & 0xff;
        int b = (rgbArray[i] >> 8) & 0xff;
        int g = rgbArray[i] & 0xff;

        if(a >= minAlpha) { rgbArray[i] = (255<<24) | (r<<16) | (g<<8) | b; }
        else { rgbArray[i] = (0<<24) | (r<<16) | (g<<8) | b; }
    }

    src.setRGB(0, 0, w, h, rgbArray, 0, w);
}
cdubbs
    You are reading ARBG, but writing ARGB. You should read ARGB too. In other words, you are reading the green value into `b` and the blue value into `g`. – erickson Feb 16 '16 at 21:12
  • Unless your `BufferedImage` is of type `TYPE_INT_ARGB` the `getRGB/setRGB(...)` methods are unnecessary slow. It's faster to access the backing data array directly. – Harald K Feb 17 '16 at 13:57
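A minimal sketch of the direct-array approach suggested in the last comment (the class and method names here are mine, not from the question): for a `TYPE_INT_ARGB` image, the backing `int[]` can be obtained via `DataBufferInt` and modified in place, avoiding the per-pixel `getRGB`/`setRGB` overhead entirely.

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class Flatten {
    // Thresholds alpha by writing straight into the image's backing int[]
    // (only valid for TYPE_INT_ARGB), avoiding per-pixel getRGB/setRGB calls.
    public static void flattenImage(BufferedImage src, int minAlpha) {
        if (src.getType() != BufferedImage.TYPE_INT_ARGB)
            throw new IllegalArgumentException("Expected TYPE_INT_ARGB");
        int[] pixels = ((DataBufferInt) src.getRaster().getDataBuffer()).getData();
        for (int i = 0; i < pixels.length; i++) {
            int a = pixels[i] >>> 24;                               // alpha byte
            pixels[i] = (a >= minAlpha) ? (pixels[i] | 0xFF000000)  // force opaque
                                        : 0;                        // transparent black
        }
    }
}
```

One caveat: grabbing the raster's data buffer directly can disable managed-image acceleration for that image, so this trades rendering optimizations for faster pixel access.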

2 Answers


What may slow you down is the instantiation of a Color object for every pixel. Please see this answer for how to iterate over the pixels of a BufferedImage and access the alpha channel: https://stackoverflow.com/a/6176783/3721907

I'll just paste the code below

public Image alpha2gray(BufferedImage src) {

    if (src.getType() != BufferedImage.TYPE_INT_ARGB)
        throw new RuntimeException("Wrong image type.");

    int w = src.getWidth();
    int h = src.getHeight();

    int[] srcBuffer = src.getData().getPixels(0, 0, w, h, null);
    int[] dstBuffer = new int[w * h];

    for (int i=0; i<w*h; i++) {
        int a = (srcBuffer[i] >> 24) & 0xff;
        dstBuffer[i] = a | a << 8 | a << 16;
    }

    return Toolkit.getDefaultToolkit().createImage(new MemoryImageSource(w, h, dstBuffer, 0, w));
}

This is very close to what you want to achieve.
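Adapting that grayscale example to the thresholding asked about in the question might look like the sketch below (class and method names are illustrative): fetch all pixels in one bulk `getRGB` call, branch on the alpha byte, and write the whole array back once.

```java
import java.awt.image.BufferedImage;

public class AlphaThreshold {
    // Bulk-fetch all pixels with one getRGB call, branch on the alpha byte,
    // then write the modified array back in a single setRGB call.
    public static void flatten(BufferedImage src, int minAlpha) {
        int w = src.getWidth(), h = src.getHeight();
        int[] argb = src.getRGB(0, 0, w, h, null, 0, w);
        for (int i = 0; i < argb.length; i++) {
            int a = argb[i] >>> 24;                             // alpha byte
            argb[i] = (a >= minAlpha) ? (argb[i] | 0xFF000000)  // force opaque
                                      : 0;                      // transparent black
        }
        src.setRGB(0, 0, w, h, argb, 0, w);
    }
}
```

Working on the packed ARGB ints directly also sidesteps the channel-ordering mistake flagged in the comments, since the RGB bits are never unpacked at all.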

  • This call is coming back ambiguous, it's expecting a int[] or float[] in the last term. Not quite sure what I should be using there. int[] srcBuffer = src.getData().getPixels(0, 0, w, h, null); – cdubbs Feb 16 '16 at 14:45
  • Yeah the code doesn't compile... Try this int[] srcBuffer = src.getData().getPixels(0, 0, w, h, new int[w*h]); – Benoit Coudour Feb 16 '16 at 17:10

You have a theoretical complexity of O(n), which you have already optimized by switching to direct bit manipulation.

You can go further and use threads (this is an embarrassingly parallel problem), but since most user machines have at most 8 physical threads it will not get you too far. You could add another level of optimization on top of this by processing the image in tiles sized to fit the memory buffers and cache levels of your system.
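As a sketch of the threading idea (names are mine), Java's parallel streams can split the per-pixel loop across cores without explicit thread management; each index writes only its own array slot, so no synchronization is needed:

```java
import java.util.stream.IntStream;

public class ParallelFlatten {
    // Distributes the per-pixel threshold loop across the common fork-join pool;
    // safe because each lambda invocation touches only its own array element.
    public static void flatten(int[] argb, int minAlpha) {
        IntStream.range(0, argb.length).parallel().forEach(i -> {
            int a = argb[i] >>> 24;
            argb[i] = (a >= minAlpha) ? (argb[i] | 0xFF000000) : 0;
        });
    }
}
```

For small images the fork-join overhead can outweigh the gain, so it may be worth benchmarking against the plain loop.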

Since, as I already mentioned, this is an embarrassingly parallel problem, the best solution is to perform GPU programming.

You can follow this tutorial on simple image processing with CUDA and change the code of the filter to something like this:

__global__ void flatten(unsigned char* input_image, unsigned char* output_image,
                        int width, int height, unsigned char threshold) {
    const unsigned int offset = blockIdx.x*blockDim.x + threadIdx.x;
    if (offset < width*height) {
        const unsigned int currentoffset = offset*4; // 4 bytes per pixel: RGBA
        if (input_image[currentoffset+3] >= threshold) {
            output_image[currentoffset]   = input_image[currentoffset];   // red
            output_image[currentoffset+1] = input_image[currentoffset+1]; // green
            output_image[currentoffset+2] = input_image[currentoffset+2]; // blue
            output_image[currentoffset+3] = 255;                          // fully opaque
        } else {
            output_image[currentoffset]   = 0;
            output_image[currentoffset+1] = 0;
            output_image[currentoffset+2] = 0;
            output_image[currentoffset+3] = 0;                            // fully transparent
        }
    }
}

If you are set on using Java, you have here a great answer on how to get started using Java with an NVIDIA GPU.

Radu Ionescu