
I have a 2D Android game that is currently causing certain devices to run out of memory. I have a number of PNGs (about 10 MB in total) that I use in the game at various times. At some points in the game, the majority of these need to be displayed at the same time.

I have tried just decreasing the resolution of my images but I am not happy with the quality after doing so.

I have read a number of posts about how to solve these memory issues, and as far as I can see texture compression is the best approach (feel free to correct me if I am wrong). I have also seen this post, which covers how to determine which texture compression formats a device supports, and I understand that part of things: Android OpenGL Texture Compression
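For reference, that check essentially boils down to reading the extension string at runtime; something like this (a rough sketch, not my exact code):

    // gl is the GL10 instance from my framework (see the Texture code below)
    String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
    boolean hasEtc1  = extensions.contains("GL_OES_compressed_ETC1_RGB8_texture");
    boolean hasPvrtc = extensions.contains("GL_IMG_texture_compression_pvrtc");
    boolean hasS3tc  = extensions.contains("GL_EXT_texture_compression_s3tc");
    boolean hasAtc   = extensions.contains("GL_AMD_compressed_ATC_texture");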

My question is two-fold:

  1. Most of my textures require alpha. I know that ETC1 does not support alpha by default, but I also know that when using ETC1 you can create a separate compressed alpha texture, as described here: http://sbcgamesdev.blogspot.com/2013/06/etc1-textures-loading-and-alpha.html. That link shows how to apply the alpha using the NDK, but I am battling to understand how to do the same thing using the standard OpenGL ES Java wrappers. Below is how I currently handle textures (i.e. no texture compression). How would I convert this to handle compressed textures where the alpha has to be loaded separately?

    GLGraphics glGraphics;
    FileIO fileIO;
    String fileName;
    int textureId;
    int minFilter;
    int magFilter;
    
    public int width;
    public int height;
    
    private boolean loaded = false;
    
    public Texture(GLGame glGame, String fileName) {
        this.glGraphics = glGame.getGLGraphics();
        this.fileIO = glGame.getFileIO();
        this.fileName = fileName;
        load();
    }
    
    public void load() {
        GL10 gl = glGraphics.getGL();
        int[] textureIds = new int[1];
        gl.glGenTextures(1, textureIds, 0);
        textureId = textureIds[0];
    
        InputStream inputStream = null;
    
        try {
            inputStream = fileIO.readAsset(fileName);
            Bitmap bitmap = BitmapFactory.decodeStream(inputStream);            
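            // Upload the decoded bitmap as an uncompressed RGBA texture; this full-size
            // upload is what drives the memory usage up.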
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            setFilters(GL10.GL_NEAREST, GL10.GL_NEAREST);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, 0);
    
            width = bitmap.getWidth();
            height = bitmap.getHeight();
    
            bitmap.recycle();
        } catch (IOException e) {
            throw new RuntimeException("Couldn't load texture '" + fileName + "'", e);
        } finally {
            if (inputStream != null) {
                try {
                    inputStream.close();
                } catch (IOException e) {
                    // do nothing
                }
            }
        }
    
        loaded = true;
    }
    
    public void reload() {
        load();
        bind();
        setFilters(minFilter, magFilter);
        glGraphics.getGL().glBindTexture(GL10.GL_TEXTURE_2D, 0);
    }
    
    public void setFilters(int minFilter, int magFilter) {
        this.minFilter = minFilter;
        this.magFilter = magFilter;
    
        GL10 gl = glGraphics.getGL();
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, minFilter);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, magFilter);
    }
    
    public void bind() {
        GL10 gl = glGraphics.getGL();
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    }
    
    public void dispose() {
        loaded = false;
    
        GL10 gl = glGraphics.getGL();
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
        int[] textureIds = { textureId };
        gl.glDeleteTextures(1, textureIds, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, 0);
    }
    
    public boolean isLoaded() {
        return loaded;
    }
    
    public void setLoaded(boolean loaded) {
        this.loaded = loaded;
    }
    
  2. My understanding is that I would have to provide four compressed textures (one for each format) plus a fallback uncompressed PNG for each of my images to support a wide range of devices. My concern is the increase in the size of the game on disk that this will cause. Is there any solution to this, i.e. can I use compressed textures to lower the memory usage of my game without making its size on disk explode?

brent777
  • Have you tried rendering textures from client memory? – user1095108 May 15 '14 at 20:04
  • Related Google Developer video on Texture management: https://www.youtube.com/watch?v=jHXzzHElFPk&index=15&list=PLOU2XLYxmsII5O-vSGx2S3dN-hSVc3Cs_ – Morrison Chang May 15 '14 at 20:08
  • @user1095108 I am not sure what you mean exactly. What would this approach look like? – brent777 May 15 '14 at 20:09
  • @brent777 I was thinking about calling `glTexImage2D` repeatedly on different data (coming from client memory), that is, not preloading the textures onto the GPU. – user1095108 May 15 '14 at 20:23
  • @user1095108 I'm not so keen on that approach. It's kind of side-stepping a bigger issue and I don't think it will be sustainable as the game continues to grow – brent777 May 15 '14 at 21:26
  • @MorrisonChang +1 for the link. Very interesting. I think I must be doing something wrong when creating my GPU compressed images using ARM because they were a lot larger relative to my PNGs than what the speaker was indicating – brent777 May 15 '14 at 21:29
  • @brent777 From my experience, on Android devices, in general, there is no GPU memory; GPU memory = CPU memory. So the approach may not be so bad. – user1095108 May 16 '14 at 04:12

1 Answer


Rather than providing alpha textures in four different compression formats, a better approach is to split the alpha out of the images and use ETC1 for the color part, and possibly for the alpha part as well. The tricky part is that you must separate the alpha of each image into its own texture file and then write an OpenGL ES fragment shader that samples each color/alpha pair with two samplers and recombines them. The shader code would look like this:

    precision mediump float;

    uniform sampler2D   sampler_color;
    uniform sampler2D   sampler_alpha;
    varying vec2        texCoord;

    void main()
    {
        vec3  vColor = texture2D(sampler_color, texCoord).rgb;
        float fAlpha = texture2D(sampler_alpha, texCoord).r;
        gl_FragColor = vec4(vColor, fAlpha);
    }

This will work on over 99% of Android devices and will allow all of your alpha textures to be compressed, which not only makes them smaller but also makes them load faster.
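On the Java side, loading each color/alpha pair could look roughly like the sketch below. This assumes OpenGL ES 2.0, the ETC1Util helper from the Android SDK, and .pkm files produced offline (for example with the SDK's etc1tool or the ARM Mali texture tool); the class, method, and asset names are only illustrative, not part of your existing framework.

    import android.content.res.AssetManager;
    import android.opengl.ETC1Util;
    import android.opengl.GLES20;

    import java.io.IOException;
    import java.io.InputStream;

    public class Etc1PairLoader {

        // Load one ETC1-compressed .pkm asset into a new texture object and return its id.
        public static int loadEtc1(AssetManager assets, String pkmName) throws IOException {
            int[] ids = new int[1];
            GLES20.glGenTextures(1, ids, 0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            InputStream in = assets.open(pkmName);
            try {
                // Uploads the compressed data directly; on the rare device without ETC1
                // support it decompresses to the fallback format given here (RGB 565).
                ETC1Util.loadTexture(GLES20.GL_TEXTURE_2D, 0, 0,
                        GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, in);
            } finally {
                in.close();
            }
            return ids[0];
        }

        // Bind a color/alpha pair to texture units 0 and 1 so the shader above can recombine them.
        public static void bindPair(int program, int colorTexId, int alphaTexId) {
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, colorTexId);
            GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, alphaTexId);
            GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "sampler_color"), 0);
            GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "sampler_alpha"), 1);
        }
    }

You would load each color .pkm and its matching alpha .pkm once with loadEtc1() and then call bindPair() before drawing any geometry that uses that image.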

ClayMontgomery
  • Thanks @ClayMontgomery. I should've mentioned that I am using OpenGL ES 1.1. – brent777 May 25 '14 at 00:47
  • I have accepted this answer since it is the correct approach if you are using OpenGL ES 2.0 or above. I did not state that I am using 1.1 in the question. I found that if you want to / must stick with 1.1 then using ETC1 with alphas is not really possible since you need to combine the RGB channels into alpha values which isn't possible with multi-texturing as far as I know. – brent777 Jun 14 '14 at 00:43