
I have a set of data that is 256x256x18 bytes (256 in width, 256 in height, and every "pixel" is 18 bytes), and I want to render it to a normal 256x256 RGBA picture.

I know how to do this on the CPU. I have also learned how to use texture2D to do some per-pixel work with normal RGBA pictures.

I wonder whether that data can be used as a special "texture". If so, how do I store and sample it with OpenGL/GLSL?

//edit 20190426

Some details about rendering every fragment:

Here is the structure of the 18-byte "pixel":

struct Element {          // 4 * 4 + 2 = 18 bytes
    struct Layer layer1;
    struct Layer layer2;
    struct Layer layer3;
    struct Layer layer4;
    uint16 id2;
};

struct Layer {            // 4 bytes
    uint8 weight;
    uint16 id1;
    uint8 light;          // this will not be used
};

Meanwhile, there is a color table of about 5000 colors for id1 and another table of about 30 colors for id2.

The render algorithm is something like this:

RGBATuple renderElement(Element e) {
    // start from the base color for id2, then blend layer4 .. layer1 over it
    RGBATuple c = colorTable2[e.id2];
    c = mix(colorTable1[e.layer4.id1], c, e.layer4.weight);
    c = mix(colorTable1[e.layer3.id1], c, e.layer3.weight);
    c = mix(colorTable1[e.layer2.id1], c, e.layer2.weight);
    c = mix(colorTable1[e.layer1.id1], c, e.layer1.weight);
    return c;
}

The data are read from a file or received over the network. All the Elements form a picture (a 2D matrix) in row-major order: the first row holds elements 0 to 255, the second row holds 256 to 511, and so on.
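For reference, one element is decoded from the flat byte buffer roughly like this on the CPU (a sketch only; it assumes little-endian uint16 fields and tight packing with no padding):

#include <stdint.h>

typedef struct { uint8_t weight; uint16_t id1; uint8_t light; } Layer;
typedef struct { Layer layer[4]; uint16_t id2; } Element;

static uint16_t read_u16le(const uint8_t *p) {
    return (uint16_t)(p[0] | (p[1] << 8));
}

Element decodeElement(const uint8_t *data, int x, int y) {
    const uint8_t *p = data + (y * 256 + x) * 18;  /* row-major layout */
    Element e;
    for (int i = 0; i < 4; ++i) {
        e.layer[i].weight = p[0];
        e.layer[i].id1    = read_u16le(p + 1);
        e.layer[i].light  = p[3];                  /* unused in rendering */
        p += 4;
    }
    e.id2 = read_u16le(p);
    return e;
}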

DWCarrot

2 Answers


As you know, the CPU approach is to create a pixel conversion function of your choice that takes the data of one pixel (those 18 bytes) and converts it into an RGBA value to your liking. You then fill an array with the converted RGBA values of your dataset and feed this array to glTexImage2D.
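A minimal sketch of that approach in C (convertPixel() is a hypothetical placeholder for your own 18-byte-to-RGBA conversion, e.g. the renderElement() logic from the question; the texture object is assumed to exist already and error handling is omitted):

#include <stdint.h>
#include <GL/gl.h>

/* convertPixel() stands in for your own 18-byte -> RGBA conversion. */
void convertPixel(const uint8_t *src /* 18 bytes */, uint8_t *dst /* 4 bytes */);

void uploadConverted(GLuint tex, const uint8_t *data /* 256*256*18 bytes */) {
    static uint8_t rgba[256 * 256 * 4];

    /* convert every element to RGBA8 on the CPU */
    for (int i = 0; i < 256 * 256; ++i)
        convertPixel(data + i * 18, rgba + i * 4);

    /* upload the converted image as an ordinary 2D texture */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}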

If you need a more interactive visualization of your data instead of a simple texture, you can use the previous approach multiple times (with different conversion functions) to generate multiple textures, each containing some of your data's channels. You can then display whichever textures you need, or even combine them interactively in a fragment shader.

However, if you are looking to simply feed OpenGL your flat array of data as a 2D texture, where each pixel's data is 18 bytes long, then I believe you're out of luck. The glTexImage2D documentation only specifies pixel data formats consisting of 1 to 4 channels of primitive data types (byte, short, int, float), and no such format uses 18 bytes per pixel. Although glPixelStore allows you to specify a custom offset between rows of your pixel data, you cannot have any stride between individual pixels, so you will still need to convert your array on the CPU beforehand.

Magma
  • All right, thank you. But I wonder, is there any way to do this on the GPU (for example, using OpenCL)? – DWCarrot Apr 26 '19 at 01:53
  • Maybe you could provide more details about what you're trying to achieve. Where does your data come from, how often does it update, and why is conversion on CPU too slow for your purposes? – Magma Apr 26 '19 at 09:48
  • OK, I have added some details about the data format and the algorithm. As for the reason to use the GPU: this application is built with Java for some reason, and Java is just too slow (I have tested it). Or maybe I have to build a C++ library and call it through JNI? – DWCarrot Apr 26 '19 at 13:28
  • In my experience Java is *easily* fast enough for what you're trying to do, especially since receiving the data from a network sounds like the much bigger bottleneck. – Magma Apr 29 '19 at 14:48
  • But if you *really* don't want to do these calculations on the CPU, then I suggest creating three textures for your image: one containing the weights of each pixel, one containing the `id1`s, and one containing the `id2`s, plus two more 1D textures containing the color tables. Then you can do the exact same calculation in your fragment shader instead of in Java, by sampling your color-table texture at the location retrieved from your `id1` texture. Make sure to set all your texture min/mag filters to NEAREST (see the shader sketch after these comments). – Magma Apr 29 '19 at 14:54
  • I don't recommend that solution though, for several reasons: As you know, I don't believe that this solution is significantly faster than the CPU solution. But more importantly, I believe that the GPU-based solution is more awkward and much harder to use, extend and maintain. This is a textbook example of [premature optimization](https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize), so don't do it until you're done with everything else, and even then don't do it unless your program is still slow and you *know* it's because of that. – Magma Apr 29 '19 at 15:03
  • Thank you very much, I will think it over. – DWCarrot Apr 30 '19 at 02:50
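To make the approach from the comments above concrete, here is a rough GLSL fragment shader sketch. The uniform names, the texture formats, and the packing of layer1..layer4 into the r/g/b/a channels are assumptions for illustration, not something given in the thread; it also assumes the uint8 weights are uploaded into a normalized GL_RGBA8 texture, so they arrive in the shader already scaled to [0, 1].

#version 330 core
// Assumed inputs (all filtering set to NEAREST):
//   uWeights : GL_RGBA8    - weights of layer1..4 in r,g,b,a (auto-normalized to [0,1])
//   uId1     : GL_RGBA16UI - id1 of layer1..4 in r,g,b,a
//   uId2     : GL_R16UI    - id2 per element
//   uTable1  : 1D GL_RGBA8 - ~5000-entry color table for id1
//   uTable2  : 1D GL_RGBA8 - ~30-entry color table for id2
uniform sampler2D  uWeights;
uniform usampler2D uId1;
uniform usampler2D uId2;
uniform sampler1D  uTable1;
uniform sampler1D  uTable2;

in  vec2 vTexCoord;
out vec4 fragColor;

vec4 lookup1(uint id) { return texelFetch(uTable1, int(id), 0); }

void main() {
    ivec2 p   = ivec2(vTexCoord * vec2(textureSize(uWeights, 0)));
    vec4  w   = texelFetch(uWeights, p, 0);
    uvec4 id1 = texelFetch(uId1, p, 0);
    uint  id2 = texelFetch(uId2, p, 0).r;

    // same blending order as renderElement() in the question
    vec4 c = texelFetch(uTable2, int(id2), 0);
    c = mix(lookup1(id1.a), c, w.a);   // layer4
    c = mix(lookup1(id1.b), c, w.b);   // layer3
    c = mix(lookup1(id1.g), c, w.g);   // layer2
    c = mix(lookup1(id1.r), c, w.r);   // layer1
    fragColor = c;
}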

Yes, you can, but you need to use a non-clamped texture format (e.g. an integer format); otherwise the channels would be clamped to <0.0, 1.0> or <0, 255> or whatever the interval is, messing up your custom pixel format organization... And yes, GLSL is the way, but without more details on the custom pixel format we can only guess how to implement it. Take a look at this:

If you look closer, you will see that my texture holds the scene geometry in vector form instead of an image.

So I see 2 options here:

  1. multiply resolution of the texture

    so you fetch multiple texels per fragment that together add up to your 18 bytes. This needs a slight change in the texture coordinates, depending on which dimension you multiplied (doable inside GLSL). Beware that this is limited by the maximum texture resolution, so for bigger textures it is a good idea to multiply both the x and y resolution to still fit within the limit. See the sketch after this list.

  2. use multitexturing

    so you use several textures whose texels together add up to your 18 bytes. This, however, is limited by the number of texture units, so you cannot keep many textures bound for other stuff unless bindless texturing is used.
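Here is a rough GLSL sketch of option 1. It assumes the 18-byte elements are padded to 20 bytes on upload and stored as a (256*5) x 256 GL_RGBA8UI texture, so each element occupies 5 integer texels (texels 0..3 = layer1..4 as weight/id1_lo/id1_hi/light, texel 4 = id2_lo/id2_hi/pad/pad). The uniform names, the padding scheme, and the little-endian byte order are my assumptions, not something given in this answer.

#version 330 core
// uData   : (256*5) x 256 GL_RGBA8UI texture holding the padded elements
// uTable1 : 1D color table (~5000 entries) indexed by id1
// uTable2 : 1D color table (~30 entries) indexed by id2
uniform usampler2D uData;
uniform sampler1D  uTable1;
uniform sampler1D  uTable2;

in  vec2 vTexCoord;
out vec4 fragColor;

void main() {
    ivec2 p    = ivec2(vTexCoord * vec2(256.0));   // element coordinate (256x256 image)
    ivec2 base = ivec2(p.x * 5, p.y);              // first of the element's 5 texels

    // texel 4 holds id2 in its first two bytes
    uvec4 t4 = texelFetch(uData, base + ivec2(4, 0), 0);
    vec4 c = texelFetch(uTable2, int(t4.r | (t4.g << 8u)), 0);

    // blend layer4 down to layer1, as in renderElement()
    for (int i = 3; i >= 0; --i) {
        uvec4 t = texelFetch(uData, base + ivec2(i, 0), 0);
        uint  id1    = t.g | (t.b << 8u);
        float weight = float(t.r) / 255.0;
        c = mix(texelFetch(uTable1, int(id1), 0), c, weight);
    }
    fragColor = c;
}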

Spektre