
I am clearly misunderstanding something simple here to do with GLSL. All the Google results point to the obvious answer: the variable I'm trying to find isn't used and has been optimised out. However, I am using the variable in question. Consider the following very basic shaders:

Vertex shader

attribute vec2 TexCoord;
varying vec2 TexCoordA;

void main(){
    gl_Position =  gl_ModelViewProjectionMatrix * gl_Vertex;
    TexCoordA = TexCoord;
}

Fragment shader

varying vec2 TexCoordA;

void main(){
    gl_FragColor = vec3(TexCoordA.x, TexCoordA.y, 0); 
}

They compile and link fine, with no errors. However, `glGetAttribLocation` returns -1 when I try to find the location of `TexCoord`. If I use `TexCoordA` for another purpose (such as a call to `texture2D()`), then I am able to find the location of `TexCoord` correctly.

Why does this matter, you're probably asking (because why else would you use UV coordinates for anything other than a texture call)? I am trying to render one pixel into a framebuffer for each UV coordinate and then read them back on a second pass; this is the only way I can guarantee the results I'm looking for.

TL;DR: Why does `glGetAttribLocation` return -1 for the above shaders, given that they compile and link without a problem?

Requested information about the code surrounding the problem area follows (I am loading about 20-25 other shaders the same way, so I'm confident the problem isn't here):

Problem lines:

    mPassOneProgram = LoadShader("PCT_UV_CORRECTION_PASS_1.vert", "PCT_UV_CORRECTION_PASS_1.frag");
    mPassOneUVLocation = glGetAttribLocation(mPassOneProgram, "TexCoord");
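For context, a minimal sketch of how the returned location can be validated, plus the alternative of pinning the index yourself (this is a sketch, not my actual code; the `glBindAttribLocation` call would only work if made before the `glLinkProgram` inside `LoadShader`):

```cpp
// Sketch: query the attribute after linking and validate the result.
GLint uvLocation = glGetAttribLocation(mPassOneProgram, "TexCoord");
if (uvLocation == -1) {
    // -1 means the attribute is not *active* in the linked program,
    // typically because the compiler optimised it out.
    std::cerr << "TexCoord is not an active attribute" << std::endl;
}

// Alternative: pin the index before glLinkProgram so no query is needed
// (the binding only takes effect at the next link).
glBindAttribLocation(mPassOneProgram, 1, "TexCoord"); // 1 = arbitrary free index
```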

Shader loader code:

GLuint LoadShader(const char *vertex_path, const char *fragment_path) {
    GLuint vertShader = glCreateShader(GL_VERTEX_SHADER);
    GLuint fragShader = glCreateShader(GL_FRAGMENT_SHADER);

    // Read shaders
    std::string vertShaderStr = readFile(vertex_path);
    std::string fragShaderStr = readFile(fragment_path);
    const char *vertShaderSrc = vertShaderStr.c_str();
    const char *fragShaderSrc = fragShaderStr.c_str();

    GLint result = GL_FALSE;
    int logLength;

    // Compile vertex shader
    std::cout << "Compiling vertex shader." << std::endl;
    glShaderSource(vertShader, 1, &vertShaderSrc, NULL);
    glCompileShader(vertShader);

    // Check vertex shader
    glGetShaderiv(vertShader, GL_COMPILE_STATUS, &result);
    glGetShaderiv(vertShader, GL_INFO_LOG_LENGTH, &logLength);
    std::vector<char> vertShaderError(logLength);
    glGetShaderInfoLog(vertShader, logLength, NULL, &vertShaderError[0]);
    std::cout << &vertShaderError[0] << std::endl;
    OutputDebugString(&vertShaderError[0]);

    // Compile fragment shader
    std::cout << "Compiling fragment shader." << std::endl;
    glShaderSource(fragShader, 1, &fragShaderSrc, NULL);
    glCompileShader(fragShader);

    // Check fragment shader
    glGetShaderiv(fragShader, GL_COMPILE_STATUS, &result);
    glGetShaderiv(fragShader, GL_INFO_LOG_LENGTH, &logLength);
    std::vector<char> fragShaderError(logLength);
    glGetShaderInfoLog(fragShader, logLength, NULL, &fragShaderError[0]);
    std::cout << &fragShaderError[0] << std::endl;
    OutputDebugString(&vertShaderError[0]);

    std::cout << "Linking program" << std::endl;
    GLuint program = glCreateProgram();
    glAttachShader(program, vertShader);
    glAttachShader(program, fragShader);
    glLinkProgram(program);

    glGetProgramiv(program, GL_LINK_STATUS, &result);
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &logLength);
    std::vector<char> programError( (logLength > 1) ? logLength : 1 );
    glGetProgramInfoLog(program, logLength, NULL, &programError[0]);
    std::cout << &programError[0] << std::endl;
    OutputDebugString(&vertShaderError[0]);

    glDeleteShader(vertShader);
    glDeleteShader(fragShader);

    return program;
}
jProg2015
  • are you asking after linking? can you just assign before linking? can we have some of your code around where it fails? – cfrick Feb 25 '14 at 15:21
  • GLSL optimises out variables if they are not used, so you can't find them. However, I don't see such a case in your code. But your statement "If I use TexCoordA for another purpose (such as a call to "texture2D()") then I am able to find the location of TexCoord correctly." sounds like this is the reason –  Feb 25 '14 at 15:24
  • Also isn't gl_FragColor supposed to be vec4? Maybe it is some how causing this? –  Feb 25 '14 at 15:25
  • @cfrick Added surrounding code for better understanding. – jProg2015 Feb 25 '14 at 15:30
  • @taytay The shaders you see above are exactly how they are in the current program that are giving the problem - so as you can see they are used. Changing the line to vec4(TexCoordA.x, TexCoordA.y, 0, 1) also still has the same problem. – jProg2015 Feb 25 '14 at 15:30
  • that all looks ok at first glance. i'd rather use the `result` at least with an assert for now. maybe be proactive about it and set the attributes forcefully (this is what I do); do it before linking! have a look here: http://stackoverflow.com/questions/15639957/glgetattriblocation-returns-1-when-retrieving-existing-shader-attribute – cfrick Feb 25 '14 at 15:40
  • I think you have an error in your fragment shader. If you look carefully at your logging code, you output `vertShaderError` each time. The error in your fragment shader might be using `0` and `1` instead of `0.0` and `1.0` (some compilers are fussier than others) – GuyRT Feb 25 '14 at 15:49
  • Sorry - just noticed you also output the correct logs to `cout`, so if you're looking at that output it should be correct. – GuyRT Feb 25 '14 at 15:51
  • Where are your `#version` directives? – genpfault Feb 25 '14 at 15:56
  • 1
    How do you know they compile correctly? You never check `result`. – genpfault Feb 25 '14 at 15:59
  • #version 120 but I've tried with higher (as high as 410 compatibility). - And I've checked result with debug points. – jProg2015 Feb 25 '14 at 16:06
  • @cfrick I tried your suggestion of binding the location forcefully and I get "Access violation reading location 0x0028b000." – jProg2015 Feb 25 '14 at 16:31
  • Maybe TexCoord is reserved somehow, have you tried other variable names? – Jean-Simon Brochu Feb 25 '14 at 21:18
  • 1
    Also I don't really understand why you use "varying out" as varying is the 1.2 equivalent of 1.5's out. When I try it in http://codedstructure.net/projects/webgl_shader_lab/, it does not compile – Jean-Simon Brochu Feb 25 '14 at 21:24

2 Answers


Managed to solve this by doing

gl_FrontColor = vec4(TexCoord.x, TexCoord.y, 0.0, 1.0);

in the Vertex shader and

gl_FragColor = gl_Color;

in the Fragment shader.

Which is essentially the same thing, and I still don't understand why it wasn't working before. I'm going to put this one down to a bug in the compiler, as nobody else seems to be able to find a problem.

jProg2015
glGetShaderiv(vertShader, GL_COMPILE_STATUS, &result);
...
glGetShaderiv(fragShader, GL_COMPILE_STATUS, &result);
...
glGetProgramiv(program, GL_LINK_STATUS, &result);

Each of these should be followed by a check that `result` is equal to `GL_TRUE`; otherwise the shader hasn't compiled (or the program hasn't linked) properly. See here for a complete shader / program set of classes.
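A minimal sketch of that check for the vertex shader, reusing the variables from the loader above (the same pattern applies to `fragShader` and to the link status):

```cpp
// Sketch: stop (and print the log) as soon as compilation fails.
glGetShaderiv(vertShader, GL_COMPILE_STATUS, &result);
if (result != GL_TRUE) {
    glGetShaderiv(vertShader, GL_INFO_LOG_LENGTH, &logLength);
    std::vector<char> log(logLength > 1 ? logLength : 1);
    glGetShaderInfoLog(vertShader, logLength, NULL, &log[0]);
    std::cerr << "Vertex shader compile failed:\n" << &log[0] << std::endl;
    return 0; // 0 is never a valid program handle, so callers can detect failure
}
```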

Jherico