I'm trying to create a dynamic array of arrays (of arrays), but for some reason the data gets corrupted. I'm using the data to generate a texture in an OpenGL application.

The following code works fine:

unsigned char imageData[64][64][3];
for (int i = 0; i < 64; i++)
{
    for (int j = 0; j < 64; j++)
    {
        unsigned char r = 0, g = 0, b = 0;
        if (i < 32)
        {
            if (j < 32)
                r = 255;
            else
                b = 255;
        }
        else
        {
            if (j < 32)
                g = 255;
        }
        imageData[i][j][0] = r;
        imageData[i][j][1] = g;
        imageData[i][j][2] = b;
    }
    std::cout << std::endl;
}

glTexImage2D(target, 0, GL_RGB, 64, 64, 0, GL_RGB, GL_UNSIGNED_BYTE, imageData);

Problem is, I want to be able to create a texture of any size (not just 64*64). So I'm trying this:

unsigned char*** imageData = new unsigned char**[64]();
for (int i = 0; i < 64; i++)
{
    imageData[i] = new unsigned char*[64]();
    for (int j = 0; j < 64; j++)
    {
        imageData[i][j] = new unsigned char[3]();
        unsigned char r = 0, g = 0, b = 0;
        if (i < 32)
        {
            if (j < 32)
                r = 255;
            else
                b = 255;
        }
        else
        {
            if (j < 32)
                g = 255;
        }
        imageData[i][j][0] = r;
        imageData[i][j][1] = g;
        imageData[i][j][2] = b;
    }
    std::cout << std::endl;
}

glTexImage2D(target, 0, GL_RGB, 64, 64, 0, GL_RGB, GL_UNSIGNED_BYTE, imageData);

But that doesn't work; the image gets all messed up, so I assume I'm creating the array of arrays (of arrays) incorrectly? What am I doing wrong?

Also, I guess I should be using vectors instead. But how can I cast the vector-of-vectors-of-vectors data into a (void*)?

birgersp
  • what do you mean by gets all messed up? what happens exactly? – PYA Jul 07 '17 at 14:36
  • unsigned char* imageData = new unsigned char[width*height*3]; – nullqube Jul 07 '17 at 14:36
  • @pyjg: see my edit, the colors are not showing as expected. Not sure how to describe it better than that, I could upload a screenshot? – birgersp Jul 07 '17 at 14:39
  • @nullqube if I create the array like that I cannot assign to it by x,y,channel indices. What are you suggesting, exactly? – birgersp Jul 07 '17 at 14:41
  • @Birger i think he is suggesting creating a 1 dimensional array instead of a three dimensional array, but you would have to change your "indexing" logic for it to work. – PYA Jul 07 '17 at 14:46
  • Okay. Changing the indexing method is not a problem for me, I just want something that can generate a texture (of any size) programmatically. – birgersp Jul 07 '17 at 14:50
  • read this https://www.khronos.org/opengl/wiki/Example/Texture_Array_Creation. You've got it right now; just change your unsigned char* imageData to unsigned int* (and new unsigned int), but keep the pixel layout the same as it is. – nullqube Jul 07 '17 at 16:36
  • ALSO, it was (y*WIDTH) + x. Each line is a row of pixels, and each row is exactly WIDTH pixels long. – nullqube Jul 07 '17 at 16:38

2 Answers


This line contains multiple bugs:

unsigned char* pixel = &(imageData[(y * height) + x]);

You should multiply x by height and add y. And there's also the fact that each pixel is actually 3 bytes. Some issues that led to this bug in your code (and will lead to others):

  • You should also be using std::vector. You can call std::vector::data to get a pointer to the underlying data, to interface with C APIs.
  • You should have a class that represents a pixel. This will handle the offsetting correctly, give things names, and make the code clearer.
  • Whenever you encode a multidimensional array into a one-dimensional one, you should carefully write an access function that takes care of the indexing, so you can test it separately (see the sketch just below).
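
To illustrate that last point, here is a minimal sketch of such a standalone index helper (the name pixelIndex is illustrative, not from this thread; the class below uses the transposed x*height + y order instead, which is equally valid as long as it is applied consistently):

#include <cassert>
#include <cstddef>

// Maps (x, y) in a width-by-height image to a flat row-major index.
std::size_t pixelIndex(std::size_t x, std::size_t y, std::size_t width) {
    return y * width + x;
}

int main() {
    // Sanity checks that run independently of any OpenGL code.
    assert(pixelIndex(0, 0, 64) == 0);    // first pixel of first row
    assert(pixelIndex(63, 0, 64) == 63);  // last pixel of first row
    assert(pixelIndex(0, 1, 64) == 64);   // first pixel of second row
    return 0;
}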

#include <vector>

struct Pixel {
    unsigned char red;
    unsigned char green;  // declared in R, G, B order to match GL_RGB's byte layout
    unsigned char blue;
};

struct TwoDimPixelArray {
    TwoDimPixelArray(int width, int height)
      : m_width(width), m_height(height)
    {
        m_vector.resize(m_width * m_height);
    }

    Pixel& get(int x, int y) {
        // x-major layout; any order works as long as it is used consistently
        return m_vector[x * m_height + y];
    }

    Pixel* data() { return m_vector.data(); }

private:
    int m_width;
    int m_height;
    std::vector<Pixel> m_vector;
};

int width = 64;
int height = 64;

TwoDimPixelArray imageData(width, height);

for (int x = 0; x != width; ++x) {
    for (int y = 0; y != height; ++y) {
        auto& pixel = imageData.get(x, y);

        // ... pixel.red = something, pixel.blue = something, etc
    }
}

glTexImage2D(target, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, imageData.data());
Nir Friedman
  • The important part is that this design is OO design, with data holders being replaced by proper classes, unlike an array of arrays, which can be anything. Have my upvote. – Aziuth Jul 07 '17 at 15:32
  • ALSO, it was (y*WIDTH) + x. Each line is a row of pixels, and each row is exactly WIDTH pixels long. – nullqube Jul 07 '17 at 16:39
  • @nullqube I mean it depends entirely on whether you want to store as row-major or column-major form, but sure. I agree that your formula leads to consistent results, as does mine, but not the one in the original question. – Nir Friedman Jul 07 '17 at 17:21
  • Not just helping me out with my question, but showing me what seems to me to be very good coding practice. Thank you very much. – birgersp Jul 09 '17 at 15:58
  • Just a sidenote someone might find useful: I've learned that OpenGL stores images starting with the lower-left pixel (and then going right), while images are (usually) stored starting with the upper-left (and then going right). So in my code the index of a pixel is actually computed like so (see the sketch below): `((height - y - 1) * width + x)` – birgersp Jul 12 '17 at 08:24
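
A minimal sketch of that flipped index, assuming top-left image coordinates (the helper name glPixelIndex is hypothetical):

#include <cstddef>

// OpenGL expects the first row of the buffer to be the bottom of the
// image, so flip y when converting from top-left image coordinates.
std::size_t glPixelIndex(std::size_t x, std::size_t y,
                         std::size_t width, std::size_t height) {
    return (height - y - 1) * width + x;
}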

You need to use contiguous memory for it to work with OpenGL. My solution is inspired by the previous answer, with a different indexing scheme:

const unsigned int width = 64, height = 64;  // works for any size
unsigned char* imageData = new unsigned char[width * height * 3];
const unsigned int row_size_bytes = width * 3;

for (unsigned int x = 0; x < width; x++) {
   unsigned int column_offset_bytes = x * 3;  // byte offset of this pixel within its row
   for (unsigned int y = 0; y < height; y++) {
      unsigned char r = 0, g = 0, b = 0;  // compute the pixel's color here
      unsigned int one_dim_offset = y * row_size_bytes + column_offset_bytes;
      unsigned char* pixel = &(imageData[one_dim_offset]);
      pixel[0] = r;
      pixel[1] = g;
      pixel[2] = b;
   }
}

Unfortunately it's untested, but I'm confident, assuming sizeof(char) is 1 (which the standard guarantees).
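
For completeness, a sketch of the upload step this buffer feeds; it is not part of the original answer and assumes target refers to a bound GL_TEXTURE_2D and that width/height match the allocation above:

// The default unpack alignment is 4; tightly packed RGB rows may not be
// 4-byte aligned, so relax it before uploading arbitrary-width textures.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(target, 0, GL_RGB, width, height, 0, GL_RGB,
             GL_UNSIGNED_BYTE, imageData);
delete[] imageData;  // safe once glTexImage2D has copied the data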

Tezirg