
Currently, I'm able to load a statically sized texture which I have created. In this case it's 512 x 512.

This code is from the header:

#define TEXTURE_WIDTH 512
#define TEXTURE_HEIGHT 512

GLubyte textureArray[TEXTURE_HEIGHT][TEXTURE_WIDTH][4];

Here's the usage of glTexImage2D:

glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    TEXTURE_WIDTH, TEXTURE_HEIGHT,
    0, GL_RGBA, GL_UNSIGNED_BYTE, textureArray);

And here's how I'm populating the array (rough example, not exact copy from my code):

for (int i = 0; i < getTexturePixelCount(); i++)
{
    textureArray[column][row][0] = (GLubyte)pixelValue1;
    textureArray[column][row][1] = (GLubyte)pixelValue2;
    textureArray[column][row][2] = (GLubyte)pixelValue3;
    textureArray[column][row][3] = (GLubyte)pixelValue4;
}

How do I change that so that there's no need for TEXTURE_WIDTH and TEXTURE_HEIGHT? Perhaps I could use a pointer-style array and dynamically allocate the memory...

Edit:

I think I see the problem: in C++, a dynamically sized multi-dimensional array can't really be allocated in one go. The workaround, as pointed out by Budric, is to use a single-dimensional array whose size is the product of all three dimensions:

GLbyte *array = new GLbyte[xMax * yMax * zMax];

And to access, for example x/y/z of 1/2/3, you'd need to do:

GLbyte byte = array[1 * 2 * 3];

However, the problem is, I don't think the glTexImage2D function supports this. Can anyone think of a workaround that would work with this OpenGL function?

Edit 2:

Attention OpenGL developers: this can be overcome by using a single-dimensional array of pixels, laid out so that the channels of one pixel are adjacent, the pixels within a row follow one another, and the rows follow one another:

[0]: column 0 > [1]: row 0 > [2]: channel 0 ... n > [n]: row 1 ... n > [n]: column 1 ... n

... no need to use a 3-dimensional array. In this case I've had to use this workaround, as dynamically allocated 3-dimensional arrays are apparently not directly possible in C++.

Nick Bolton
    You don't simply multiply the indices. It's (rowIndex * numColumns * numColourComponents + columnIndex + colourComponent); – Budric Mar 25 '09 at 14:56
    Oops. Should be (rowIndex * numColumns * numColourComponents + columnIndex*numColourComponents + colourComponent); – Budric Mar 25 '09 at 14:57

3 Answers


OK, since this took me ages to figure out, here it is:

My task was to implement the example from the OpenGL Red Book (9-1, p373, 5th Ed.) with a dynamic texture array.

The example uses:

static GLubyte checkImage[checkImageHeight][checkImageWidth][4];

Trying to allocate a 3-dimensional array of pointers, as you might guess, won't do the job, because the rows end up scattered across memory while glTexImage2D expects one contiguous block. Something like this does NOT work:

GLubyte ***checkImage;
checkImage = new GLubyte**[HEIGHT];

for (int i = 0; i < HEIGHT; ++i)
{
  checkImage[i] = new GLubyte*[WIDTH];

  for (int j = 0; j < WIDTH; ++j)
    checkImage[i][j] = new GLubyte[DEPTH];
}

You have to use a one dimensional array:

unsigned int depth = 4;

GLubyte *checkImage = new GLubyte[height * width * depth];

You can access the elements using these loops:

for(unsigned int ix = 0; ix < height; ++ix)
{
  for(unsigned int iy = 0; iy < width; ++iy)
  {
    int c = (((ix & 0x8) == 0) ^ ((iy & 0x8) == 0)) * 255;

    checkImage[ix * width * depth + iy * depth + 0] = c;   //red
    checkImage[ix * width * depth + iy * depth + 1] = c;   //green
    checkImage[ix * width * depth + iy * depth + 2] = c;   //blue
    checkImage[ix * width * depth + iy * depth + 3] = 255; //alpha
  }
}

Don't forget to delete it properly:

delete [] checkImage;

Hope this helps...

Xcessity

You can use

int width = 1024;
int height = 1024;
GLubyte * texture = new GLubyte[4*width*height];
...
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, texture);
delete [] texture;         // delete the no-longer-needed local copy of the texture data

However, you still need to specify the width and height to OpenGL in the glTexImage2D call. This call copies the texture data, and that copy is managed by OpenGL. You can delete, resize, or change your original texture array all you want and it won't make a difference to the texture you gave to OpenGL.

Edit: C/C++ only really deals with 1-dimensional arrays. The fact that you can write texture[a][b] is hidden away and converted by the compiler at compile time: it must know the number of columns, and it generates the equivalent of texture[a*cols + b].

Use a class to hide the allocation of, and access to, the texture.

For academic purposes, if you really want dynamic multi-dimensional arrays, the following should work:

int rows = 16, cols = 16;
char * storage = new char[rows * cols];
char ** accessor2D = new char *[rows];
for (int i = 0; i < rows; i++)
{
    accessor2D[i] = storage + i*cols;
}
accessor2D[5][5] = 2;
assert(storage[5*cols + 5] == accessor2D[5][5]);
delete [] accessor2D;
delete [] storage;

Notice that in all these cases I'm still using 1D arrays; they are just arrays of pointers, and arrays of pointers to pointers, which carries a memory overhead. Also, this is done for a 2D array without colour components; for 3D dereferencing it gets really messy. Don't use this in your code.

Budric
  • Hmm, this would work, only when assigning values to the array, I get this error: invalid types ‘unsigned char[int]’ for array subscript – Nick Bolton Mar 25 '09 at 01:50
  • Are you using texture[i][j]? You can't use that with the code above. texture[row * width + column]. – Budric Mar 25 '09 at 14:07
  • @Budric, my array needs to have 3 dimensions, not 2. I'm accessing it in the style of texture[column][row][channel] – Nick Bolton Mar 26 '09 at 15:27

You could always wrap it up in a class. If you are loading the image from a file, you get the height and width along with the rest of the data (how else could you use the file?), so you could store them in a class that wraps the file loading instead of using preprocessor defines. Something like:

class ImageLoader
{
...
  ImageLoader(const char* filename, ...);
...
  int GetHeight();
  int GetWidth();
  void* GetDataPointer();
...
};

Even better you could hide the function calls to glTexImage2d in there with it.

class GLImageLoader
{
...
  GLImageLoader(const char* filename, ...);
...
  GLuint LoadToTexture2D(); // returns texture id
...
};
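A sketch of what LoadToTexture2D might do internally, assuming a current OpenGL context and the accessors from the first class (this is a fragment, not a complete program, and everything beyond the standard GL calls is hypothetical):

```cpp
// Sketch only: assumes a valid OpenGL context is current and that the
// loader exposes GetWidth()/GetHeight()/GetDataPointer() as above.
GLuint GLImageLoader::LoadToTexture2D()
{
    GLuint id = 0;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 GetWidth(), GetHeight(),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, GetDataPointer());
    return id; // the caller can glBindTexture this id later
}
```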
jheriko