
I’m working on an app that creates its own texture atlas. The elements on the atlas can vary in size but are placed in a grid pattern.

It’s all working fine except that when I overwrite a section of the atlas with a new element (the data from an NSImage), the image is shifted a pixel to the right.

The code I’m using to write the pixels onto the atlas is:

- (void)writeToPlateWithImage:(NSImage *)anImage atCoord:(MyGridPoint)gridPos
{
    static NSSize insetSize; //ultimately this is the size of the image in the box
    static NSSize boundingBox; //this is the size of the box that holds the image in the grid
    static CGFloat multiplier;
    multiplier = 1.0;
    NSSize plateSize = NSMakeSize(atlas.width, atlas.height);//Size of entire atlas

    MyGridPoint _gridPos;

    //make sure the column and row position is legal
    _gridPos.column = gridPos.column >= m_numOfColumns ? m_numOfColumns - 1 : gridPos.column;
    _gridPos.row = gridPos.row >= m_numOfRows ? m_numOfRows - 1 : gridPos.row;

    //clamp the lower bound on the already-clamped value so the upper bound isn't discarded
    _gridPos.column = _gridPos.column < 0 ? 0 : _gridPos.column;
    _gridPos.row = _gridPos.row < 0 ? 0 : _gridPos.row;

    insetSize = NSMakeSize(plateSize.width / m_numOfColumns, plateSize.height / m_numOfRows);
    boundingBox = insetSize;

    //…code here to calculate the size to make anImage so that it fits into the space allowed
    //on the atlas.
    //multiplier var will hold a value that sizes up or down the image…

    insetSize.width = anImage.size.width * multiplier;
    insetSize.height = anImage.size.height * multiplier;

    //provide a padding around the image so that when mipmaps are created the image doesn’t ‘bleed’
    //if it’s the same size as the grid’s boxes.
    insetSize.width -= ((insetSize.width * (insetPadding / 100)) * 2);
    insetSize.height -= ((insetSize.height * (insetPadding / 100)) * 2);

    //roundUp() is a handy function I found somewhere (I can't remember where now)
    //that makes the first param a multiple of the second.
    //Here we make sure the image rows are aligned: it's RGBA, so we make
    //the dimensions a multiple of 4.
    insetSize.width = (CGFloat)roundUp((int)insetSize.width, 4);
    insetSize.height = (CGFloat)roundUp((int)insetSize.height, 4);

    NSImage *insetImage = [self resizeImage:[anImage copy] toSize:insetSize];
    NSData *insetData = [insetImage TIFFRepresentation];
    GLubyte *data = malloc(insetData.length);
    memcpy(data, [insetData bytes], insetData.length);
    insetImage = nil;
    insetData = nil;
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, atlas.textureIndex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //have also tried 2,4, and 8
    GLint Xplace = (GLint)(boundingBox.width * _gridPos.column) + (GLint)((boundingBox.width - insetSize.width) / 2);
    GLint Yplace = (GLint)(boundingBox.height * _gridPos.row) + (GLint)((boundingBox.height - insetSize.height) / 2);
    glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D);
    free(data);
    glBindTexture(GL_TEXTURE_2D, 0);
    glGetError();
}

The images are RGBA, 8-bit (as reported by Photoshop). Here's a test image I've been using: [test image]

and here's a screen grab of the result in my app: [screenshot of the shifted result]

Am I unpacking the image incorrectly? I know the resizeImage: function works, as I've saved its result to disk and also bypassed it, so the problem is somewhere in the GL code...

EDIT: just to clarify, the section of the atlas being rendered is larger than the box diagram. So the shift is occurring within the area that's written to with glTexSubImage2D.

EDIT 2: Sorted, finally, by offsetting the copied data that goes into the section of the atlas.

I don't fully understand why that works; perhaps it's a hack rather than a proper solution, but here it is.

//resize the image to fit into the section of the atlas
NSImage *insetImage = [self resizeImage:[anImage copy] toSize:NSMakeSize(insetSize.width, insetSize.height)];
//raw TIFF data and a pointer to its bytes
NSData *insetData = [insetImage TIFFRepresentation];
const GLubyte *insetDataPtr = [insetData bytes];
//for debugging, I placed the offset value next
int offset = 8; //it needed a 2 pixel (2 * 4 bytes for RGBA) offset
//copy the data, minus the offset, into a temporary data buffer
GLubyte *data = malloc(insetData.length - offset);
memcpy(data, insetDataPtr + offset, insetData.length - offset);
/*
.
. Calculate its position within the texture
.
*/
//And finally overwrite the texture
glTexSubImage2D(GL_TEXTURE_2D, 0, Xplace, Yplace, (GLsizei)insetSize.width, (GLsizei)insetSize.height, GL_RGBA, GL_UNSIGNED_BYTE, data);
free(data);
Todd
  • You may be running into the issue I answered already here: http://stackoverflow.com/a/5879551/524368 – datenwolf Jun 19 '14 at 17:53
  • Hi datenwolf, your answer seems to be related to using pixel coordinates instead of normalised, texture ones. I'm not doing that. Can you say how that will help me? – Todd Jun 20 '14 at 08:28
  • It's not really about pixel coordinates, but about pixel-perfect addressing of texels. This is especially important for texture atlases. A common misconception is that texture coordinates 0 and 1 lie exactly on pixel centers. In OpenGL this is not the case: texture coordinates 0 and 1 lie exactly on the border between the pixels where the texture wraps. If you build your texture atlas under the assumption that 0 and 1 are on pixel centers, then using the very same addressing scheme in OpenGL will lead to either a blurry picture or pixel shifts. You need to account for this. – datenwolf Jun 20 '14 at 08:34
  • Thanks datenwolf, I appreciate the way your comments are leading me in the right direction, but I'm being really stupid and not seeing the answer myself. Am I right in thinking that I should shift my overall atlas texture by 0.5 texels? – Todd Jun 20 '14 at 09:46
  • Almost. On the left/bottom side, 0 is a little too far to the left/bottom, and on the right/top side, 1 is a little too far to the right/top. What you have to do is slightly scale down your coordinate range and then apply the shift. Why the scale-down? In the other post I gave the formula `t = (x + 0.5)/N` to convert from pixel to normalized texture coordinates, but you must remember that x is from the range [0; N[; in normalized coordinates this means `0 <= t < 1`. Merely applying a +0.5 pixel shift would push t beyond 1, which must not happen. – datenwolf Jun 20 '14 at 09:57
  • datenwolf: do you want to cut and paste one of your comments into an answer? I have sorted it with your help by offsetting the copied, inset image by two texels... I still don't understand how that makes a difference to a sub-section of the texture that's being rendered. – Todd Jun 25 '14 at 08:44

1 Answer


You may be running into the issue I answered already here: stackoverflow.com/a/5879551/524368

It's not really about pixel coordinates, but about pixel-perfect addressing of texels. This is especially important for texture atlases. A common misconception is that texture coordinates 0 and 1 lie exactly on pixel centers. In OpenGL this is not the case: texture coordinates 0 and 1 lie exactly on the border between the pixels where the texture wraps. If you build your texture atlas under the assumption that 0 and 1 are on pixel centers, then using the very same addressing scheme in OpenGL will lead to either a blurry picture or pixel shifts. You need to account for this.
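For illustration only (this snippet is not from the original post), here is a minimal C sketch of turning an atlas cell's pixel rectangle into normalized texture coordinates using the t = (x + 0.5)/N texel-center rule from the comments; the atlas and cell sizes are made-up placeholder values:

//Hypothetical sketch: normalized texture coordinates for one atlas cell,
//addressed at texel centers rather than at the shared 0/1 borders.
//atlasW/atlasH and the cell rectangle are placeholder values.
GLfloat atlasW = 1024.0f, atlasH = 1024.0f;
GLfloat cellX  = 256.0f,  cellY  = 0.0f;
GLfloat cellW  = 128.0f,  cellH  = 128.0f;

//(x + 0.5)/N places a coordinate on a texel center, so the range is
//slightly scaled down: the cell's first texel maps to (cellX + 0.5)/atlasW
//and its last texel to (cellX + cellW - 0.5)/atlasW, never touching the
//border shared with the neighbouring cell.
GLfloat u0 = (cellX + 0.5f) / atlasW;
GLfloat v0 = (cellY + 0.5f) / atlasH;
GLfloat u1 = (cellX + cellW - 0.5f) / atlasW;
GLfloat v1 = (cellY + cellH - 0.5f) / atlasH;

Sampling between (u0, v0) and (u1, v1) keeps the interpolator away from the neighbouring cell, which is where the blur or pixel shift comes from.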

"I still don't understand how that makes a difference to a sub-section of the texture that's being rendered."

It helps a lot to understand that, to OpenGL, textures are not so much images as support samples for an interpolator (hence the "sampler" uniforms in shaders). So to get really crisp-looking images you have to choose the texture coordinates you sample from so that the interpolator evaluates exactly at the positions of the support samples. Those positions, however, are neither integer coordinates nor simple fractions (i/N).

Note that newer versions of GLSL provide the texture sampling function texelFetch, which completely bypasses the interpolator and addresses texture pixels directly. If you need pixel-perfect texturing you might find this easier to use (if available).
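For completeness, a hypothetical sketch (not part of the original answer) of how such a shader might be embedded in the client code; all names are placeholders:

//Hypothetical GLSL 1.50 fragment shader, stored as a C string, that reads an
//atlas texel directly with texelFetch: integer texel coordinates, no filtering
//and no 0..1 coordinate mapping. The uniform/varying names are made up.
static const char *atlasFragSrc =
    "#version 150\n"
    "uniform sampler2D atlas;\n"
    "uniform ivec2 cellOrigin;   // bottom-left texel of the atlas cell\n"
    "in vec2 cellPixel;          // pixel position within the cell\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = texelFetch(atlas, cellOrigin + ivec2(cellPixel), 0);\n"
    "}\n";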

datenwolf