
I'm trying to upload a texture with unsigned shorts in a shader but it's not working.

I have tried the following:

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, vbt[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 640, 480, 0, GL_RED, GL_UNSIGNED_SHORT, kinect_depth);
glUniform1i(ptexture1, 1);
GLenum ErrorCheckValue = glGetError();

I know I'm binding the texture correctly because I get some results by using

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, vbt[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 640, 480, 0,
    GL_RG, GL_UNSIGNED_BYTE, kinect_depth);
glUniform1i(ptexture1, 1);
GLenum ErrorCheckValue = glGetError();

In particular, I get part of my values in the red channel. I would like to upload the texture as an unsigned byte or as a float. However, I can't get the glTexImage2D call right. Also, is it possible to do something similar using a depth texture? I would like to do some operations on the depth information I get from a Kinect and display it.

eaponte
  • What's the type of `kinect_depth`? Why does your second example use `GL_RG` while your first uses `GL_RGB`? – Colonel Thirty Two May 01 '14 at 14:38
  • If you want to access the data in the shader as unsigned shorts, you need to store it that way in the texture, specifying that as the internal format (the third parameter). You probably want to use `GL_R16UI`, `GL_RG16UI` or `GL_RGB16UI` depending on the number of channels you have. – GuyRT May 01 '14 at 15:11
  • `GL_UNSIGNED_SHORT` in your call to `glTexImage2D (...)` has ***nothing*** to do with how the GPU stores your texture. That is only used by GL when it *reads* your image data, so it knows how to interpret the pixels. Chances are pretty good that `GL_RGB` (which is very vague as it lacks a size) is going to turn out to be 8-bit unsigned normalized (`GL_RGB8`). – Andon M. Coleman May 01 '14 at 15:40
  • @ColonelThirtyTwo the second case is a hack to make sure that I'm binding the textures properly. The data type is uint16. – eaponte May 01 '14 at 21:33

1 Answer


Your arguments to glTexImage2D are inconsistent. The 3rd argument (GL_RGB) suggests that you want a 3 component texture, the 7th (GL_RED) suggests a one-component texture. Then your other attempt uses GL_RG, which suggests 2 components.

You need to use an internal texture format that stores unsigned shorts, like GL_RGB16UI.

If you want one component, your call would look like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);

If you want three components:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, 640, 480, 0, GL_RGB_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
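
Putting the one-component case together, a minimal setup sketch (reusing `vbt`, `ptexture1` and `kinect_depth` from the question) could look like the following; note the explicit NEAREST filters, since integer textures cannot be linearly filtered and the default minification filter expects mipmaps:

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, vbt[1]);

/* Integer textures must use NEAREST filtering, and the default filter
   state (mipmapped minification, linear magnification) would leave this
   texture incomplete, so set both filters explicitly. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0,
    GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
glUniform1i(ptexture1, 1);  /* the sampler uniform reads from texture unit 1 */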

You also need to make sure that the types used in your shader for sampling the texture match the type of the data stored in the texture. In this example, since you use a 2D texture containing unsigned integer values, your sampler type should be usampler2D, and you want to store the result of the sampling operation (result of texture() call in the shader) in a variable of type uvec4. (paragraph added based on suggestion by Andon)
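
For illustration, a hypothetical GLSL 3.30 fragment shader doing this, written here as a C string (the uniform and varying names are only placeholders for this sketch):

static const char *depth_frag_src =
    "#version 330\n"
    "uniform usampler2D tex1;\n"    /* unsigned integer texture -> usampler2D */
    "in vec2 uv;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    uvec4 raw = texture(tex1, uv);\n"    /* integer result, not normalized */
    "    float d = float(raw.r) / 65535.0;\n" /* scale manually for display */
    "    frag_color = vec4(d, d, d, 1.0);\n"
    "}\n";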

Some more background on the format/type arguments of glTexImage2D, since this is a source of fairly frequent misunderstandings:

The 3rd argument (internalFormat) is the format of the data that your OpenGL implementation will store in the texture (or at least the closest possible if the hardware does not support the exact format), and that will be used when you sample from the texture.

The last 3 arguments (format, type, data) belong together. format and type describe what is in data, i.e. they describe the data you pass into the glTexImage2D call.
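
To make the grouping concrete, here is the one-component call from above with each argument labeled:

glTexImage2D(GL_TEXTURE_2D,    /* target */
    0,                         /* mipmap level */
    GL_R16UI,                  /* internalFormat: how the GPU stores the texture */
    640, 480,                  /* width, height */
    0,                         /* border, must be 0 */
    GL_RED_INTEGER,            /* format: layout of the data you pass in */
    GL_UNSIGNED_SHORT,         /* type: element type of the data you pass in */
    kinect_depth);             /* data: pointer to your pixels */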

It is generally a good idea to keep the two formats matched. In this case, the data you pass in is GL_UNSIGNED_SHORT, and the internal format GL_R16UI contains unsigned short values. In OpenGL ES the internal format is required to match format/type. Full OpenGL does the conversion if necessary, which is undesirable for performance reasons, and also frequently not what you want because the precision of the data in the texture won't be the same as the precision of your original data.
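
As the comments below point out, if you don't strictly need the raw integer values in the shader, a normalized fixed-point texture is often sufficient for depth data. A sketch of that variant, keeping the same 640x480 uint16 buffer, would use the sized GL_R16 internal format; the values then arrive in the shader through an ordinary sampler2D as floats in [0, 1]:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 640, 480, 0,
    GL_RED, GL_UNSIGNED_SHORT, kinect_depth);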

Reto Koradi
  • You might want to add to this, information about required changes to a GLSL shader in order to actually use an integer texture. (*e.g.* `usampler2D` and then the returned color would be stored in a `uvec4`). I am not entirely convinced the OP needs/wants an integer texture, normalized fixed-point seems adequate for depth calculations. – Andon M. Coleman May 01 '14 at 18:12
  • @Andon: Good suggestion, I'll add that. I had just added some other detail. Adding the sampling aspect as well will make the answer more complete. – Reto Koradi May 01 '14 at 18:25
  • @Andon: Ok, done, thanks. BTW, I noticed while doing some fact checking that newer GL versions also have format values like `GL_RED_INTEGER` and `GL_RGB_INTEGER`. I didn't get around to digging into the full details of what those are about. It looks like they should actually be used in this case. – Reto Koradi May 01 '14 at 18:39
  • Yes and no. `GL_RED_INTEGER` tells GL to keep the exact integer value rather than normalizing. Say for instance you have `GL_R16UI` and use `GL_RED` and `GL_UNSIGNED_BYTE`... a value of **255** will be normalized to **1.0** in this case, and then that value of **1.0** maps to **65535** in a 16-bit unsigned integer texture. If you use `GL_RED_INTEGER`, that value of **255** stays **255**. This is a quirk related to pixel transfer; GL ES, which does not support data conversion during pixel transfer absolutely requires `GL_R16UI` to be paired with `GL_RED_INTEGER` or it will create an error. – Andon M. Coleman May 01 '14 at 20:28
  • I'm not sure I fully understood the answer. It seems to be important to keep the internalFormat and the format consistent, but at the same time it seems not to be compulsory and probably not the source of my problem, which is that I only get zeros in the shader. On the other hand, I did try GL_R16UI but my laptop returns an error (Invalid Operation from gluErrorString). I forgot to mention that in my question. – eaponte May 01 '14 at 20:55
  • Actually I cannot use `usampler2D` in GLSL. I get an error and my program stops. The problematic line is `uniform usampler2D tex1`. I use OpenGL 3.3 with Mesa 10.1.0 on Linux. – eaponte May 01 '14 at 21:10
  • `GL_R16UI` is a required format since OpenGL 3.0. Not sure why that would give you an error if you're using 3.3. On the shader, what `#version` directive are you using as the first line of your GLSL code? – Reto Koradi May 01 '14 at 21:17
  • @RetoKoradi I'm using `#version 330`. The line `glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0, GL_RED, GL_UNSIGNED_SHORT, kinect_depth);` produces an error. The line `glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 640, 480, 0, GL_RED, GL_UNSIGNED_SHORT, kinect_depth);` does not produce an error. – eaponte May 01 '14 at 21:25
  • Honestly, you probably don't want to use an integer texture for storing depth from a Kinect camera. You can easily work with normalized depth, and that does not require DX10+ hardware. – Andon M. Coleman May 01 '14 at 21:32
  • @AndonM.Coleman So the problem is probably that my hardware doesn't allow me to read uint16? I would like to use a depth buffer but I could not make that work. On top of that I would have to cast all my values to float. – eaponte May 01 '14 at 21:43
  • You don't have to cast anything to float. GL does fixed-point to floating-point conversion for you when it transfers pixels into an `R{G|B|A}` format. That is to say, a value of **65535** is divided by the maximum value for `GL_UNSIGNED_SHORT` (65535) and this is the floating-point value (**1.0**). If you cast your data, that would be an entirely different operation. Basically, I think you are having trouble understanding the difference between a texture's internal format and the Pixel Transfer operations GL does with ***input*** image data. Input pixels are converted to the internal format. – Andon M. Coleman May 01 '14 at 21:45