
I'm not fully satisfied with the quality of the mipmaps automatically generated by this line of code:

glTexParameterf(GL10.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL10.GL_TRUE);

I thought of creating (with GIMP) scaled-down versions of every texture used in my game. For example, for a ball texture I would have:

ball256.png 256x256 px

ball128.png 128x128 px

ball64.png 64x64 px

ball32.png 32x32 px

ball16.png 16x16 px

1. Do you think this is a good idea?

2. How can I create a single mipmapped texture from all these images?

VanDir

1 Answer


This is not only a good idea, but it is a pretty standard practice (particularly in Direct3D)!

OpenGL implementations tend to use a standard box filter (uniformly weighted) when you generate mipmaps. You can use a nicer tent filter (bilinear) or even a cubic spline (bicubic) when downsampling textures in image editing suites. Personally, I would prefer a Lanczos filter, since this is going to be done offline.
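To make the filter terminology concrete, here is a minimal sketch of the uniformly weighted 2x2 box filter described above, operating on a raw ARGB pixel array. The method name and the even-dimension assumption are mine, purely for illustration:

```java
// Sketch of a 2x2 box-filter downsample: each output texel is the
// unweighted average of the four source texels it covers. Assumes
// width and height are even (true for power-of-two textures).
static int[] boxDownsample(int[] src, int w, int h) {
    int[] dst = new int[(w / 2) * (h / 2)];
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            int a = 0, r = 0, g = 0, b = 0;
            for (int dy = 0; dy < 2; ++dy) {
                for (int dx = 0; dx < 2; ++dx) {
                    int p = src[(2 * y + dy) * w + (2 * x + dx)];
                    a += (p >>> 24) & 0xFF;
                    r += (p >>> 16) & 0xFF;
                    g += (p >>> 8) & 0xFF;
                    b += p & 0xFF;
                }
            }
            // Average the four samples per channel and repack.
            dst[y * (w / 2) + x] = ((a / 4) << 24) | ((r / 4) << 16)
                                 | ((g / 4) << 8) | (b / 4);
        }
    }
    return dst;
}
```

A tent or Lanczos filter instead weights a wider neighborhood of source texels, which is why an offline tool can beat this default.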

You may already be aware of this, but Direct3D has a standard texture format known as DDS (DirectDraw Surface) which allows you to pre-compute and pre-compress every mipmap level at content creation time instead of load time. This decreases compressed texture load time and, more importantly, allows for much higher quality sample reconstruction when downsampling into each LOD. OpenGL also has a standardized format that does the same thing: KTX. I brought up Direct3D because, although OpenGL has a standardized format, very few people seem to know about it; DDS is much more familiar to most people.

If you do not want to use the standardized formats I mentioned above, you can always load your levels of detail one at a time manually by calling glTexImage2D (..., n, ...), where n is the LOD (0 being the highest-resolution level of detail). You would do this in a loop over each LOD when you create your texture; this is actually how things like gluBuild2DMipmaps (...) work.
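For the Android/OpenGL ES 1.x bindings the question uses, that loop could look like the following sketch. The MipmapLoader class name and the decoded Bitmap array are assumptions about your asset pipeline, not a fixed API:

```java
import javax.microedition.khronos.opengles.GL10;
import android.graphics.Bitmap;
import android.opengl.GLUtils;

// Hypothetical helper class, not part of any standard library.
public final class MipmapLoader {
    // Uploads pre-scaled images as the mipmap chain of a single texture.
    // levels[0] is the full-resolution image (e.g. ball256.png) and each
    // following entry must be exactly half the size of the previous one.
    // Caveat: in GLES 1.x the chain must go all the way down to 1x1
    // (256, 128, 64, 32, 16, 8, 4, 2, 1) or the texture is incomplete.
    public static int loadMipmappedTexture(GL10 gl, Bitmap[] levels) {
        int[] id = new int[1];
        gl.glGenTextures(1, id, 0);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, id[0]);

        // Trilinear filtering: blend between the two nearest LODs.
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER,
                GL10.GL_LINEAR_MIPMAP_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER,
                GL10.GL_LINEAR);

        // One upload per LOD, exactly as described above;
        // GLUtils.texImage2D forwards the level argument for us.
        for (int level = 0; level < levels.length; ++level) {
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, level, levels[level], 0);
        }
        return id[0];
    }
}
```

You would decode each PNG into a Bitmap first (for example with BitmapFactory.decodeResource) and pass them in from largest to smallest.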

Andon M. Coleman
  • Surely generating mipmaps using a nearest neighbour filter is 100% useless, because it defeats the entire purpose of using mipmaps? Does any implementation actually do that by default? – Karu Apr 30 '14 at 05:12
  • Yes, in fact most do. It in no way defeats the purpose of mipmapping. The filter used to generate the texels for LODs is very different from the filter used when sampling from an LOD. Mipmapping produces texels that are closer in size to the area covered by a fragment, so applying a linear filter (weighted avg of 4 nearest texels) to LODs samples from a larger effective area of the image (irrespective of how the larger texels were computed). Consider the diagram [here](http://stackoverflow.com/questions/19123706/opengl-directx-how-does-mipmapping-improve-performance/19126515#19126515). – Andon M. Coleman Apr 30 '14 at 16:16
  • But if the LOD was built using nearest neighbour sampling, you're not sampling from a larger effective area of the original image; it's still just 4 texels. It's just a different effective area because it's not 4 neighbouring texels. I guess for some textures this would give a small improvement in image quality, but it's still the case that in a 4x reduced LOD, 15/16 of the original texel colors aren't represented at all. Bilinear filtering on a texture that has itself been point sampled is not a whole lot better than simply point sampling the original texture. – Karu Apr 30 '14 at 22:34
  • Each LOD is 1/4 the resolution of the prior. If you take the 4 nearest samples at LOD0, that covers the same area as a ***single*** sample of LOD1. Now, if you take the 4 nearest samples at LOD1 that covers the same area as 16 samples at LOD0. Hardware always takes the 4 nearest samples for linear minification, at higher LODs those 4 samples cover a larger piece of the total image and give more accurate results. You will lose detail from the image if your sample area is too small, and that is what happens if you always use the same LOD. It has nothing to do with how the LODs were calculated. – Andon M. Coleman Apr 30 '14 at 23:08
  • You can improve image quality even farther if you use a higher quality filter to generate the 1/4 resolution LODs, but that does not change the fact that implementations tend to use a box filter by default and that even this can very noticeably improve image quality and cache performance. – Andon M. Coleman Apr 30 '14 at 23:09
  • If LOD1 is created by nearest-neighbour sampling LOD0, one sample at LOD1 covers the same area as one sample at LOD0 - that is, one original texel. Taking the 4 nearest samples from LOD1 covers the same area as 4 samples from LOD0, not 16. It can't possibly cover the same area as 16 samples from LOD0, because 3/4 of LOD0's texels were completely ignored when creating LOD1. – Karu Apr 30 '14 at 23:15
  • You are confusing point filter with box filter, by the way. A point filter will take the nearest neighbor and that is the end of it. A box filter will take the 4 nearest neighbors, add them up and average them. `GL_NEAREST` is a point filter, `GL_LINEAR` is a tent filter, it adds a weighted average step to the same process used by box filtering. – Andon M. Coleman Apr 30 '14 at 23:16
  • Ah! I understand. I thought you were talking about point filtering (nearest neighbour) all along. I've never heard of "box filter". At any rate it seems equivalent to bilinear when scaling down by power-of-two factors. – Karu Apr 30 '14 at 23:21