
I need help understanding why a specific change is required to make my OpenGL project work on OS X (2019 MacBook), while without the change it works perfectly on Windows and Linux, on both ATI and NVIDIA hardware.

At some point I'm rendering to a framebuffer that is 1024 pixels wide and 1 pixel high. I need a straightforward orthographic projection, so for my projection matrix I use:

glm::ortho(0.f, (float)LookupMapSize, 1.f, 0.f)

With this projection matrix on Windows and Linux, I render my line geometry and it works as expected: all pixels are written to.
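
For reference, a stripped-down sketch of the pass (the FBO/uniform names here are placeholders, not my exact code; it assumes a GL loader header is already included):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void renderLookupPass(GLuint lookupFbo, GLint projLocation, int LookupMapSize /* 1024 */)
{
    glBindFramebuffer(GL_FRAMEBUFFER, lookupFbo);   // 1024x1 color attachment
    glViewport(0, 0, LookupMapSize, 1);
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);

    // ortho(left, right, bottom, top) -- works on Windows/Linux, not on OS X
    glm::mat4 proj = glm::ortho(0.f, (float)LookupMapSize, 1.f, 0.f);
    glUniformMatrix4fv(projLocation, 1, GL_FALSE, glm::value_ptr(proj));

    // ... bind shader + VAO and draw the line geometry ...
}
```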

On OS X however, I initially saw nothing ending up in my framebuffer, just the color I cleared it to with glClearColor and glClear. Suspecting a shader issue, I set the fragment output to vec4(1) expecting an all-white result, but I still saw nothing but the clear color in my framebuffer. Depth testing, blending, culling and stencils were not an issue, so it had to be that my matrices were wrong. After much fiddling, I finally figured out that all I had to do was change my projection matrix to this:

glm::ortho(0.f, (float)LookupMapSize, 0.f, 1.f)

But why? Where does this difference come from? On Windows/Linux, bottom is at 1.f and top is at 0.f, while on OS X it's exactly the other way around. If I use the "OS X" matrix on Windows/Linux, I get the exact same bug I initially had on OS X.

Rather than just keeping this platform specific change in my code, I would like to understand what's going on.

edit: I check all my OpenGL calls automatically (glGetError); nothing returns any errors anywhere. Unfortunately the OpenGL debug functions (glDebugMessageCallback) are not available on OS X...

edit: I verified that on both OS X and Linux/Windows the results of glm::ortho are identical. So my input into OpenGL is the same on all platforms.
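
A sketch of how such a comparison can be done (it assumes GLM's experimental string_cast extension; not necessarily how the check was actually performed):

```cpp
#define GLM_ENABLE_EXPERIMENTAL   // needed for <glm/gtx/string_cast.hpp> in recent GLM versions
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/string_cast.hpp>
#include <iostream>

int main()
{
    const float LookupMapSize = 1024.f;
    const glm::mat4 winLinux = glm::ortho(0.f, LookupMapSize, 1.f, 0.f); // bottom=1, top=0
    const glm::mat4 osx      = glm::ortho(0.f, LookupMapSize, 0.f, 1.f); // bottom=0, top=1
    std::cout << glm::to_string(winLinux) << "\n"
              << glm::to_string(osx)      << std::endl;
    return 0;
}
```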

foddex
  • what are the GLSL logs saying? My experience over the last 2 decades is that advanced or non-standard shader use is not reliable across gfx vendors, driver versions and OSes; standard features usually work well everywhere ... so you have to test on lots of different HW to be reasonably sure you are bug free... my bet is your OS X gfx driver has some syntax issue with your shader code (usually something trivial like an unsupported char or literal which works on other implementations)... also what gfx is used on your Mac? Intel usually has problems rendering to texture – Spektre Mar 05 '21 at 12:33
  • @Spektre as said I didn't need to change anything other than the orthographic projection to make it work perfectly, so I don't think it's the shader code that's wrong here (it's not doing anything special anyway). And I should probably add to my question that I check all OpenGL calls automatically, and that nothing reports any errors anywhere, thanks – foddex Mar 05 '21 at 12:59
  • macOS has pretty broken OpenGL drivers, so who knows. You can double check if you are within the OpenGL spec, but even if you are Apple won't fix the drivers. – Acorn Mar 05 '21 at 13:05
  • @Spektre it's Intel hardware btw, Intel UHD Graphics 630 1536 MB – foddex Mar 05 '21 at 13:30
  • @foddex hmm I hate Intel HD graphics ... they have the worst OpenGL drivers and Intel never repairs them for older cards ... – Spektre Mar 05 '21 at 15:36
  • It could be endianness? Does your matrix math make some assumptions about memory layout? Or maybe glm does(unlikely)? – Mudkip Hacker Mar 05 '21 at 21:28
  • @foddex how about printing the results of the transform from the vertex shader and comparing between working and non-working machines (environments)? ... see [GLSL debug prints](https://stackoverflow.com/a/44797902/2521214) ... that could hint at what is going on. I do not think endianness is the problem; IIRC floats are normalized to a defined endianness, unlike integers. I would be less surprised by a transposed matrix or an optimized-out uniform... – Spektre Mar 06 '21 at 07:39
  • @MudkipHacker i don't think endianness should be a problem in this case. The macbook has an intel i7, my other machines are also i7's or i9's, so all run the same type of hardware. Plus, i don't see how endianness would affect this in this context. – foddex Mar 06 '21 at 09:10
  • @Spektre i'll try that out once i have the time... i wish RenderDoc would work on OSX, i can really simply see those vertex shader results on Windows and Linux with RenderDoc... can't find the OpenGL Profiler app in the OSX app store, probably cause it was deprecated by Apple. And my loan Macbook runs the latest OSX, probably really slim chance i'll get that to work. Will post here once i got results. – foddex Mar 06 '21 at 09:12
  • @Spektre interestingly enough, I just figured out that my Linux laptop has a dual Intel UHD 630 / Geforce setup, but Linux doesn't recognize the nvidia card (never got that to work). So as far as i can determine (using glxinfo output) on that machine it also uses Intel hardware, and it works perfectly on it... – foddex Mar 06 '21 at 09:15
  • @foddex but with different drivers ... Intel has notoriously buggy GL implementations ... similarly like ATI had in the past before AMD took over it – Spektre Mar 06 '21 at 09:17
  • @Spektre fair point – foddex Mar 06 '21 at 09:19
  • btw, wasn't there some sort of setting in OpenGL to determine where in a pixel you are referring to with a specific coordinate? I vaguely remember something like that, that in DirectX it was the top left corner or something, but with OpenGL it was the center of the pixel. Or was that only for when reading from textures? I thought about it cause it sounds related to this problem, but I can't find anything about it when googling on the subject. – foddex Mar 06 '21 at 09:20
  • @foddex Yes, there was, I remember it too, but I have no idea which GL call sets it up – Spektre Mar 06 '21 at 09:21
  • https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/gl_FragCoord.xhtml – Spektre Mar 06 '21 at 09:24
  • but there are no gl function names there ... it was something from the original GL 1.0, try to look in gl.h – Spektre Mar 06 '21 at 09:25
  • @Spektre thanks for looking that up, maybe I just have to accept that it's a driver bug and move on... – foddex Mar 06 '21 at 09:29
  • This question is clearly lacking further information about how the actual rendering is done, especially which primitive type with which coordinates is going to be rendered. I can come up with _several_ examples where the behavior you describe is _fully conformant_ to the GL specs and whether you would actually get to see your primitive on some implementations and not others is only luck. – derhass Mar 06 '21 at 13:45
  • My _hunch_ is that you just draw a line at `y=0` or `y=1` with respect to the projection matrix you did set up, which is totally wrong, correct would be `y=0.5`. But if you render to a `wx1` framebuffer for some special "lookup" generation pass anyway, it begs the question what you need an ortho matrix for at all, draw in clip space directly. – derhass Mar 06 '21 at 13:49
  • @derhass ok i thought it wasn't relevant since it didn't seem to have any influence on the problem... i will look into your suggestion and come back – foddex Mar 07 '21 at 12:30
  • @derhass "draw in clip space directly", you're extremely right, i never thought of that... i'm so used to using ortho matrices, i never even thought of not using one... – foddex Mar 07 '21 at 12:36
  • @derhass THANK YOU, now that i render in clip space i have one code base that works everywhere... i was rendering my line with the ortho matrix on y=0, can you enlighten me why this is wrong, and why it should be on y=0.5? – foddex Mar 07 '21 at 12:45

1 Answer


OpenGL is not specified as a pixel-exact rendering API, and GPUs and different drivers (even on the same GPU) do not produce identical output for identical inputs. However, the OpenGL specification does make hard requirements that implementors must fulfill, and that you, as a user of the API, can rely on.

In your case, with a 1 pixel high viewport and an ortho matrix setting the y range from 0 to 1, y=0 will be the bottom edge of your pixel row, and y=1 will be the top edge. If you draw a line exactly on an edge between two pixels, the OpenGL specification does not specify in which direction implementations must "round" in this case; they just must round the same way every time.

So this means that if your two options are y=0 and y=1, one of the two will *not* draw the line (because it technically lies outside of your framebuffer), but which one is completely implementation-specific.

However, drawing lines on the edges between pixels on purpose is a bad idea, especially if one has some very specific pixels in mind to be filled by this. Setting the vertices at the center of the pixels you want to fill makes the most sense, and here that is y=0.5.
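
For illustration (a sketch with placeholder names, not your actual code), the endpoints with the 0-to-1 ortho matrix would look like this:

```cpp
#include <glm/glm.hpp>

// Sketch: with glm::ortho(0.f, (float)LookupMapSize, 0.f, 1.f) the single
// pixel row spans y in [0,1], so the pixel centers lie on y = 0.5.
const int LookupMapSize = 1024;          // placeholder value, from the question
const glm::vec2 line[2] = {
    { 0.f,                  0.5f },      // left end of the pixel row
    { (float)LookupMapSize, 0.5f },      // right end of the pixel row
};
```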

However, for a pass which just generates a width×1 LUT, I don't see the need to set up any sort of transform matrices; you can work in untransformed clip space and just draw from (-1,0,0,1) to (1,0,0,1). And y=0 here is fine, as that is exactly the vertical center of your viewport.
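
A minimal sketch of that variant (array and attribute names are assumptions; the vertex shader just passes the position through):

```cpp
#include <glm/glm.hpp>

// No projection matrix at all: the endpoints are already in clip space.
// y = 0 is the vertical center of the 1-pixel-high viewport and x = -1..1
// spans its full width, so the whole row is covered no matter how the
// implementation rounds at pixel edges.
const glm::vec2 clipSpaceLine[2] = {
    { -1.f, 0.f },   // left edge of the viewport
    {  1.f, 0.f },   // right edge of the viewport
};
// Corresponding vertex shader (assumed): gl_Position = vec4(aPos, 0.0, 1.0);
```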

derhass
  • Using your last suggestion, drawing in clip space, is by far the best solution and works perfectly for me. Thanks for the other explanation, that's very clear too! – foddex Mar 07 '21 at 14:25