
I'm currently working on a particle system for my game, and I want to handle basic collisions against static, simple geometry. To do that, I generate an array of vec4s, each describing a segment (two points with XY coordinates). Then, in my vertex shader, I use that array to run my intersection test.

The issue is that it works when I run the game on my NVidia GPU but not on my Intel chipset. So I suspect something is wrong with my implementation, since NVidia drivers are known to be very permissive.

OpenGL calls

// COLLIDER_BLOCK_INDEX = 0

// glGenBuffers / glBufferData (buffer creation and allocation elided)

// Bind UBO to shader
GLuint block_index = glGetUniformBlockIndex(program, "collider_data");
glUniformBlockBinding(program, block_index, COLLIDER_BLOCK_INDEX);
glBindBufferRange(GL_UNIFORM_BUFFER, COLLIDER_BLOCK_INDEX,
                  _uboCollider, 0, 64 * sizeof(GLfloat) * 4);

// Upload the data to the GPU
glBufferSubData(GL_UNIFORM_BUFFER, 0, 64 * sizeof(GLfloat) * 4, _collider);

// Bind UBO before using it
glBindBufferBase(GL_UNIFORM_BUFFER, COLLIDER_BLOCK_INDEX, _uboCollider);
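For completeness, here is a minimal sketch of the whole UBO setup in call order, with the elided creation calls filled in (the GL_DYNAMIC_DRAW hint and the nullptr initial allocation are assumptions for a buffer that gets re-uploaded every frame):

// Sketch of the full UBO lifecycle, in call order.
GLuint _uboCollider = 0;
glGenBuffers(1, &_uboCollider);
glBindBuffer(GL_UNIFORM_BUFFER, _uboCollider);
// Allocate storage for 64 vec4s (1024 bytes); GL_DYNAMIC_DRAW is assumed
// for data that changes every frame.
glBufferData(GL_UNIFORM_BUFFER, 64 * sizeof(GLfloat) * 4, nullptr,
             GL_DYNAMIC_DRAW);

// Associate the shader's block with binding point 0 (once, after linking).
GLuint block_index = glGetUniformBlockIndex(program, "collider_data");
glUniformBlockBinding(program, block_index, COLLIDER_BLOCK_INDEX);

// Each frame: bind the buffer to the binding point and upload the data.
// (glBindBufferBase also binds to the generic GL_UNIFORM_BUFFER target,
// so the following glBufferSubData targets _uboCollider.)
glBindBufferBase(GL_UNIFORM_BUFFER, COLLIDER_BLOCK_INDEX, _uboCollider);
glBufferSubData(GL_UNIFORM_BUFFER, 0, 64 * sizeof(GLfloat) * 4, _collider);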

My data declaration

glm::vec4 _collider[64];
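Since the block is declared std140, each vec4 array element has a 16-byte stride, so glm::vec4[64] matches the GPU-side layout exactly (1024 bytes, well under the 16384-byte GL_MAX_UNIFORM_BLOCK_SIZE reported below). A compile-time sanity check along these lines (just a sketch):

// Both sides must agree on the buffer size: 64 vec4s = 1024 bytes.
static_assert(sizeof(_collider) == 64 * sizeof(GLfloat) * 4,
              "CPU collider array must match the std140 block size");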

The vertex shader (simplified)

#version 330
layout(std140) uniform collider_data {
    vec4 collider[64];
};

I read the data as:

vec2(collider[i][0], collider[i][1]) // point A (equivalent to collider[i].xy)
vec2(collider[i][2], collider[i][3]) // point B (equivalent to collider[i].zw)
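For context, the test itself is a standard segment-vs-segment intersection between the particle's motion step and each collider segment. A simplified sketch, not my exact shader code (prevPos and nextPos stand in for whatever per-particle inputs the real shader uses):

// Simplified segment-vs-segment test: does the particle's motion step
// p -> q cross the collider segment a -> b?
bool segmentsIntersect(vec2 p, vec2 q, vec2 a, vec2 b)
{
    vec2 r = q - p;
    vec2 s = b - a;
    float denom = r.x * s.y - r.y * s.x;  // 2D cross product of directions
    if (abs(denom) < 1e-6)                // parallel: treat as no hit
        return false;
    float t = ((a.x - p.x) * s.y - (a.y - p.y) * s.x) / denom;
    float u = ((a.x - p.x) * r.y - (a.y - p.y) * r.x) / denom;
    return t >= 0.0 && t <= 1.0 && u >= 0.0 && u <= 1.0;
}

// In main(), test the motion step against every collider segment:
// for (int i = 0; i < 64; ++i)
//     if (segmentsIntersect(prevPos, nextPos, collider[i].xy, collider[i].zw))
//         ... respond to the collision ...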

The issue

So as I said, everything works fine on the NVidia GPU, but not on the Intel chipset. Does anyone see anything wrong with this code? I'm pretty sure my issue comes from this UBO, because when I remove it entirely, everything works fine with both drivers.

Also, glGetError() doesn't report any OpenGL errors.
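glGetError() only catches outright API misuse; a debug-output callback usually gets far more detail out of the driver. A minimal sketch, assuming the context exposes KHR_debug (core in GL 4.3, but commonly available as an extension on 4.2 drivers):

// Print every message the driver emits (requires KHR_debug / GL 4.3,
// and a loader such as GLEW/GLAD exposing these entry points).
void GLAPIENTRY onGlDebugMessage(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar* message, const void* userParam)
{
    fprintf(stderr, "GL debug: %s\n", message);
}

// After context creation:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  // report at the offending call site
glDebugMessageCallback(onGlDebugMessage, nullptr);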

Intel chipset specs

OpenGL version [4.2.0 - Build 10.18.10.3574]
Shader version [4.20 - Build 10.18.10.3574]
GL_MAX_FRAGMENT_UNIFORM_BLOCKS [14]
GL_MAX_UNIFORM_BLOCK_SIZE [16384]
GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT [16]
  • I'm OK with the downvote, but could you at least explain why my post is incorrect? –  Oct 01 '15 at 19:02
  • I note your question is (not unreasonably) attracting "close" votes for not being minimal enough. Can you reduce it down to a few gl calls just exercising the UBO stuff that demonstrates the issue? (If you can't... that fact itself is interesting information). Also, I'd hope to see some mention of what glGetError is returning in questions like this; have you checked that? – timday Oct 01 '15 at 19:58
  • I'm going to try to reproduce my issue in a minimal code example, and I will post a new question then. Thanks for the feedback. –  Oct 01 '15 at 22:35
  • BTW I've run into a couple of "works on NVidia, fails on Intel" OpenGL issues myself (on Mac HW). In both cases the code was incorrect... I get the general impression NVidia's OpenGL is more tolerant of mistakes in OpenGL usage and will often still manage to render what was intended, while Intel seems less robust against misuse. I don't really have enough experience/evidence to be sure, though. – timday Oct 01 '15 at 23:53
  • `64 x 4 x float` is a lot of `uniforms`. My guess is that not every GPU can handle that (due to GPU register file limitations); try a lower count like 8 and see if it works as it should. If not, the problem is elsewhere. The Intel OpenGL drivers, especially for the older cards, are a mess... many times they are sensitive to unrelated things. `glGetError` is worthless for GLSL; you need to use [glGetShaderInfoLog](http://stackoverflow.com/a/31913542/2521214) for each part of your shader, and the resulting log will tell you what is wrong (especially the warnings). – Spektre Oct 02 '15 at 07:15
  • So I've made a minimal program using UBOs. I send 256 * 4 floats on the chipset and it works. God damn it :( For anyone interested in the program, here is the link: http://pastebin.com/PJjXvQV7 –  Oct 02 '15 at 18:40

0 Answers