
When doing 2D game development in Java, most tutorials create a BufferStrategy to render. This makes perfect sense. However, where the tutorials diverge is in the method of drawing the actual graphics to the buffer.

Some of the tutorials create a buffered image, then create an integer array to represent the individual pixel colors.

// expose the BufferedImage's backing int[] so individual pixels can be written directly
private BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();

// later, in the render loop: clear the buffer, then blit the whole image in one call
Graphics g = bs.getDrawGraphics();
g.setColor(new Color(0x556B2F));
g.fillRect(0, 0, getWidth(), getHeight());
g.drawImage(image, 0, 0, getWidth(), getHeight(), null);

However, other tutorials skip the BufferedImage and the int array entirely, and instead use the Graphics object of the BufferStrategy to draw their images directly to the buffer.

Graphics g = bs.getDrawGraphics();
g.setColor(new Color(0x556B2F));
g.fillRect(0, 0, getWidth(), getHeight());

// draw a pre-loaded image straight to the buffer; no intermediate pixel array involved
g.drawImage(testImage.image, x*128, y*128, 128, 128, null);

I was just wondering: why create the entire int array and then draw it? This requires a lot more work to implement rectangles, stretching, transparency, etc. The Graphics object of the buffer strategy already has methods which can easily be called. Is there some huge performance boost from using the int array?

I've looked this up for hours, and all the sites I've seen just explain what they're doing, and not why they chose to do it that way.

Kabistarz
  • I'm not 100% sure, but it would appear that some people believe it's faster to update an int/pixel array than to use the Graphics API. This may have been true; it's also likely that people didn't understand how to create compatible graphics objects. Using a buffer strategy should provide almost direct access to the hardware layer (where available), so I don't really see why you would need to use an int array, but that's just me, and I'm lazy like that ;) – MadProgrammer Sep 22 '13 at 20:02
  • Yeah, I was testing with both methods. With an int array, I could change about 27648000 pixels before my fps started dropping below 120. With the graphics object, I could render a transparent image onto a scaled-up rectangle a few thousand times, which was just about equivalent to the int array. Using the graphics object seemed more useful overall. – Kabistarz Sep 23 '13 at 20:21
  • I did a relatively simple example some time ago, using nothing more than a JPanel with some custom painting for the main object. I then got 4500 of these objects all moving in different directions, including rotation of the main object so it pointed in the direction of its movement. Not sure what the frame rate really was, but I had it refreshing at around 25fps and it worked surprisingly well. I think there's been a lot of optimisation in the rendering pipeline, with the ability to use either DirectX or OpenGL where available, but you'll need to experiment ;) – MadProgrammer Sep 23 '13 at 20:32
  • You may find that simple rendering techniques, like using a static image for unchanged objects and layering the output, will make a difference, but make sure you're using compatible images so that they will paint faster onto the device...for [example](http://www.java2s.com/Code/Java/2D-Graphics-GUI/Createbufferedimagesthatarecompatiblewiththescreen.htm) – MadProgrammer Sep 23 '13 at 20:36
  • @Kabistarz there are a lot of articles (mostly outdated) describing different optimization techniques. Some of them will recommend that you use low-level JVM instructions in order to get better performance. Would you follow them? :) Anyway, I've described some historical aspects in order to answer your question. If you're really interested in doing something non-trivial, give jogl (or some other alternative) a try. – Renat Gilmanov Jul 05 '15 at 20:45
  • @Kabistarz As far as I know, the graphics API can take advantage of the graphics card. I've never coded an int[][] to do that. I read "Killer Game Programming in Java" and I loved it. – David Pérez Cabrera Jul 06 '15 at 17:37

2 Answers


First of all, there are lots of historical aspects. The early API was very basic, so the only way to do anything non-trivial was to implement all the required primitives yourself.

Raw data access is a bit old-fashioned, and we can do some "archeology" to find the reasons such an approach was used. I think there are two main ones:

1. Filter effects

Let's not forget that filter effects (various kinds of blur, etc.) are simple to implement, very important for any game developer, and widely used.


The simplest way to implement such an effect in Java 1 was to use an int array and a filter defined as a matrix. Herbert Schildt, for example, used to have lots of such demos:

public class Blur {

    private int width;            // image width in pixels
    private int height;           // image height in pixels
    private int[] imgpixels;      // source pixels, packed as 0xAARRGGBB
    private int[] newimgpixels;   // destination pixels

    // 3x3 box blur: every output pixel is the average of its 3x3 neighbourhood
    public void convolve() {
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int rs = 0;
                int gs = 0;
                int bs = 0;
                for (int k = -1; k <= 1; k++) {
                    for (int j = -1; j <= 1; j++) {
                        int rgb = imgpixels[(y + k) * width + x + j];
                        int r = (rgb >> 16) & 0xff;
                        int g = (rgb >> 8) & 0xff;
                        int b = rgb & 0xff;
                        rs += r;
                        gs += g;
                        bs += b;
                    }
                }
                rs /= 9;
                gs /= 9;
                bs /= 9;
                newimgpixels[y * width + x] = (0xff000000
                        | rs << 16 | gs << 8 | bs);
            }
        }
    }
}

Naturally, you can implement that using getRGB, but raw data access is far more efficient. Later, Graphics2D provided a better abstraction layer:

public interface BufferedImageOp

This interface describes single-input/single-output operations performed on BufferedImage objects. It is implemented by AffineTransformOp, ConvolveOp, ColorConvertOp, RescaleOp, and LookupOp. These objects can be passed into a BufferedImageFilter to operate on a BufferedImage in the ImageProducer-ImageFilter-ImageConsumer paradigm.
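
As a rough sketch (not part of the original answer, and the class and method names here are just placeholders), the same 3x3 box blur can be expressed through that abstraction with ConvolveOp instead of a hand-rolled pixel loop:

import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class BoxBlur {

    // same 3x3 box blur as above, expressed through the BufferedImageOp abstraction
    public static BufferedImage blur(BufferedImage source) {
        float ninth = 1.0f / 9.0f;
        float[] kernelData = {
                ninth, ninth, ninth,
                ninth, ninth, ninth,
                ninth, ninth, ninth
        };
        // EDGE_NO_OP leaves the border pixels untouched, much like the manual loop above
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, kernelData), ConvolveOp.EDGE_NO_OP, null);
        return op.filter(source, null);
    }
}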

2. Double buffering

Another problem was related to flickering and really slow drawing. Double buffering eliminates the ugly flickering, and all of a sudden it also provides an easy way to do filter effects, because you already have a buffer.
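
For illustration, a minimal sketch of the classic manual double-buffering idiom (the `view` component and the clear colour are assumptions, not something from the question):

import java.awt.Color;
import java.awt.Component;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class DoubleBufferedRenderer {

    // draw the whole frame off-screen, then copy it to the screen in one step
    public void renderFrame(Component view, BufferedImage backBuffer) {
        Graphics2D g2 = backBuffer.createGraphics();
        g2.setColor(Color.BLACK);
        g2.fillRect(0, 0, backBuffer.getWidth(), backBuffer.getHeight()); // clear
        // ... draw the scene (and apply any filter effects) on the back buffer here ...
        g2.dispose();

        Graphics screen = view.getGraphics();
        screen.drawImage(backBuffer, 0, 0, null); // single blit: no flicker
        screen.dispose();
    }
}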


Something like a final conclusion :)

I would say the situation you've described is pretty common for any evolving technology. There are two ways to achieve the same goals:

  • use the legacy approach, write more code, etc.
  • rely on new abstraction layers, the techniques they provide, etc.

There are also some useful extensions to simplify your life even more, so no need to use int[] :)

Renat Gilmanov

Let's be clear about one thing: both snippets of code do exactly the same thing - draw an Image. The snippets are rather incomplete, however - the second snippet does not show what 'testImage.image' actually is or how it is created. But they both ultimately call Graphics.drawImage(), and all variants of drawImage() in either Graphics or Graphics2D draw an Image, plain and simple. In the second case we simply don't know if it is a BufferedImage, a VolatileImage or even a Toolkit image.

So there is no difference in drawing actually illustrated here!

There is but one difference between the two snippets - the first one also obtains a direct reference to the integer array that is ultimately backing the Image instance internally. This gives direct access to the pixel data, rather than having to go through the (Buffered)Image API, using for example the relatively slow getRGB() and setRGB() methods. The reason for doing that cannot be determined from this question: the array is obtained but never actually used in the snippet. So in order to give the following explanation any reason to exist, we must assume that someone wants to directly read or edit the pixels of the image, quite possibly for optimization reasons, given the "slowness" of the (Buffered)Image API for manipulating data.
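
Purely for illustration (this code is not in the question): assuming the `image`/`pixels` pair from the first snippet and some coordinates `x`/`y`, direct access means writing into the array instead of calling a per-pixel API method:

// through the BufferedImage API (convenient, but relatively slow per pixel)
image.setRGB(x, y, 0xFF556B2F);

// straight into the backing array obtained from the DataBufferInt (no API overhead)
pixels[y * image.getWidth() + x] = 0xFF556B2F;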

And those optimization reasons may amount to a premature optimization that can backfire on you.


First of all, this code only works because the type of the image is INT_RGB, which gives the image a DataBufferInt. If it had been another type of image, e.g. 3BYTE_BGR, this code would fail with a ClassCastException since the backing data buffer would not be a DataBufferInt. This may not be much of a problem when you only create images manually and enforce a specific type, but images tend to be loaded from files created by external tools.
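
For example (the file name here is hypothetical, and error handling is omitted), an image loaded from disk frequently ends up with a different type, and the cast from the first snippet will then fail:

// the resulting BufferedImage type depends on the file format and the platform decoder
BufferedImage loaded = ImageIO.read(new File("sprite.png"));

// throws ClassCastException when the backing buffer is not a DataBufferInt
int[] pixels = ((DataBufferInt) loaded.getRaster().getDataBuffer()).getData();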

Secondly, there is another, bigger downside to directly accessing the pixel buffer: when you do that, Java2D will refuse to accelerate that image, since it cannot know when you will be making changes to it outside of its control. Just for clarity: acceleration is the process of keeping an unaltered image in video memory rather than copying it from system memory each time it is drawn. This is potentially a huge performance improvement (or loss, if you break it) depending on how many images you work with.

How can I create a hardware-accelerated image with Java2D?

(As that related question shows you: you should use GraphicsConfiguration.createCompatibleImage() to construct BufferedImage instances).
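
A minimal sketch of that approach (the class name, sizes and transparency mode here are arbitrary choices, not prescribed by the answer):

import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public class CompatibleImages {

    // an image whose pixel layout matches the screen can be cached in video memory by Java2D
    public static BufferedImage create(int width, int height) {
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        return gc.createCompatibleImage(width, height, Transparency.TRANSLUCENT);
    }
}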

So in essence: try to use the Java2D API for everything and don't access buffers directly. This off-site resource gives a good idea of just what features the API has to support you in that, without having to go low-level:

http://www.pushing-pixels.org/2008/06/06/effective-java2d.html

Gimby