I'm using Geometry Shaders for geometry amplification. The code runs perfectly with Intel graphics on both Windows and OS X.
I changed the configuration to use the dedicated NVIDIA GPU on my Windows machine and... nothing.
This code:
void testError(std::string src) {
    GLenum err = glGetError();
    if (err != GL_NO_ERROR){
        printf("(%s) Error: %s %d\n", src.c_str(), gluErrorString(err), err);
    }
}
...
printf("glIsProgram: %s\n", glIsProgram(shaderProgram)?"True":"false");
glUseProgram(shaderProgram);
testError("GOGO 111");
GLint isLinked = 0;
glGetProgramiv(shaderProgram, GL_LINK_STATUS, (int *)&isLinked);
if (isLinked == GL_FALSE)
{
    GLint maxLength = 0;
    glGetProgramiv(shaderProgram, GL_INFO_LOG_LENGTH, &maxLength);
    // The maxLength includes the NULL character
    std::vector<GLchar> infoLog(maxLength);
    glGetProgramInfoLog(shaderProgram, maxLength, &maxLength, &infoLog[0]);
    printf("Program Not Linked %d:\n %s\n", maxLength, infoLog);
    // We don't need the program anymore.
    glDeleteProgram(shaderProgram);
    // Use the infoLog as you see fit.
    // In this simple program, we'll just leave
    return 0;
}
Outputs:
glIsProgram: True
(GOGO 111) Error: invalid operation 1282
Program Not Linked 116:
Ð
The log also behaves strangely: it prints nothing readable (just garbage), even though the reported length is 116.
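My guess is that the garbage comes from handing printf's %s the std::vector object itself instead of a pointer to its characters. A minimal sketch of the variant I would expect to work, reusing the same names as above:
    std::vector<GLchar> infoLog(maxLength);
    glGetProgramInfoLog(shaderProgram, maxLength, &maxLength, infoLog.data());
    // Pass the underlying character buffer, not the vector object, to %s
    printf("Program Not Linked %d:\n %s\n", maxLength, infoLog.data());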
Thank you.
EDIT: Switching the info log buffer to a plain char array:
glGetProgramiv(shaderProgram, GL_INFO_LOG_LENGTH, &maxLength);
char *infoLog = new char[maxLength];
glGetProgramInfoLog(shaderProgram, maxLength, &maxLength, infoLog);
printf("Program Not Linked %d:\n %s\n", maxLength, infoLog);
Printed out the result.
Program Not Linked 116:
Geometry info
-------------
(0) : error C6033: Hardware limitation reached, can only emit 128 vertices of this size
Which comes from:
const GLchar* geometryShaderSrc = GLSL(
    layout(points) in;
    layout(triangle_strip, max_vertices = 256) out;
...
It just seems weird that Intel integrated GPUs have fewer hardware (memory?) limitations than an NVIDIA GPU. Is there any way to work around this without decreasing the number of vertices?
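My assumption (not stated explicitly in the error text) is that the limit being hit is the total geometry output budget rather than a flat vertex count: max_vertices times the number of components written per emitted vertex has to fit in GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, which would explain "can only emit 128 vertices of this size" if each vertex writes 8 floats out of a 1024-component budget. A small sketch to query what the driver actually reports (assuming an OpenGL 3.2+ context is current):
    // Query the geometry shader output limits the driver exposes
    GLint maxOutVertices = 0, maxTotalComponents = 0, maxPerVertexComponents = 0;
    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES, &maxOutVertices);
    glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, &maxTotalComponents);
    glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_COMPONENTS, &maxPerVertexComponents);
    printf("GL_MAX_GEOMETRY_OUTPUT_VERTICES:         %d\n", maxOutVertices);
    printf("GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS: %d\n", maxTotalComponents);
    printf("GL_MAX_GEOMETRY_OUTPUT_COMPONENTS:       %d\n", maxPerVertexComponents);
If the total-components budget turns out to be the binding limit, trimming what each emitted vertex writes might raise the usable max_vertices without giving up the 256 output vertices, but I haven't verified that on this GPU.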