
I have been using OpenGL to display HDR content, following the explanation from NVIDIA: https://on-demand.gputechconf.com/gtc/2017/presentation/s7394-tom-true-programming-for-high-dynamic-range.pdf

It works great, but only on NVIDIA GPUs. The method is to specify WGL_PIXEL_TYPE_ARB = WGL_TYPE_RGBA_FLOAT_ARB with 16 bits per color channel (WGL_RED_BITS_ARB = 16, WGL_GREEN_BITS_ARB = 16, WGL_BLUE_BITS_ARB = 16).

On AMD GPUs the same code displays an SDR image: the fragment shader output is clamped to 1.0, whereas on NVIDIA GPUs values up to ~25.0 (10,000 nits, as I understand it) pass through and are displayed correctly. This is with the same TV (LG B9) and the same OS (Windows 10).

Note that other apps, like Chrome and DirectX test apps, display HDR content correctly on AMD GPUs.

I have tried a bunch of different AMD GPUs, driver settings, texture formats, pixel types, etc., with no luck.

I have read through https://gpuopen.com/ for clues, with no luck. Does anyone have an idea or an example of how to create a proper OpenGL HDR context/configuration?

I'll give a minimal example here, but it is part of a larger program and written in Delphi, so it is for orientation only:

const
  PixelAttribList: array[0..20] of Integer = (
    WGL_DRAW_TO_WINDOW_ARB, 1,
    WGL_DOUBLE_BUFFER_ARB, 1,
    WGL_SUPPORT_OPENGL_ARB, 1,
    WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
    WGL_SWAP_METHOD_ARB, WGL_SWAP_EXCHANGE_ARB,
    // Request a floating-point pixel format with 16 bits per color channel
    WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_FLOAT_ARB,
    WGL_RED_BITS_ARB, 16,
    WGL_GREEN_BITS_ARB, 16,
    WGL_BLUE_BITS_ARB, 16,
    WGL_ALPHA_BITS_ARB, 0,
    0); // zero-terminated attribute list
var
  piFormats: array[0..99] of GLint; // receives up to 100 matching pixel formats
  nNumFormats: GLuint;
  pfd: TPixelFormatDescriptor;
begin
  wglChoosePixelFormatARB(DC, @PixelAttribList, nil, 100, @piFormats[0], @nNumFormats);
  if nNumFormats = 0 then
    Exit;
  // SetPixelFormat wants a PIXELFORMATDESCRIPTOR, so describe the chosen format first
  DescribePixelFormat(DC, piFormats[0], SizeOf(pfd), pfd);
  if not SetPixelFormat(DC, piFormats[0], @pfd) then
    Exit;
  hrc := wglCreateContextAttribsARB(DC, 0, nil);
  if hrc <> 0 then
    ActivateRenderingContext(DC, hrc);
end;

After this code I queried the chosen format with wglGetPixelFormatAttribivARB and I get 16 bits per color channel, so exactly what is needed.
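
The verification looks roughly like this (a simplified sketch, not the exact production code; it assumes `piFormats[0]` is the format that was set above and uses the standard WGL_ARB_pixel_format query):

var
  Attribs: array[0..3] of GLint;
  Values: array[0..3] of GLint;
begin
  // Ask the driver what the chosen pixel format actually is
  Attribs[0] := WGL_PIXEL_TYPE_ARB;
  Attribs[1] := WGL_RED_BITS_ARB;
  Attribs[2] := WGL_GREEN_BITS_ARB;
  Attribs[3] := WGL_BLUE_BITS_ARB;
  if wglGetPixelFormatAttribivARB(DC, piFormats[0], 0, 4, @Attribs[0], @Values[0]) then
  begin
    // Expect a float pixel type and 16/16/16 bits per channel here
    Assert(Values[0] = WGL_TYPE_RGBA_FLOAT_ARB);
    Assert((Values[1] = 16) and (Values[2] = 16) and (Values[3] = 16));
  end;
end;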

The fragment shader is simple:

gl_FragColor = vec4(25.0,25.0,25.0,1.0);

Regards

Milika
  • It's very common that shaders behave differently on Intel, AMD and NVIDIA. My experience is that NVIDIA usually works 99% of the time without a problem, maybe 10% of things work on Intel, and AMD is very picky about what you do; they tend to work "properly" only with specific data types, pixel formats... One wrong setting and all goes to hell. Also, machines using Win10 with a GPU inside the CPU give me a headache, as they have bad drivers (at least those I come in contact with), making GLSL virtually useless... The usual workaround is to have separate shader and CPU code for each GPU vendor... – Spektre Mar 12 '22 at 06:33
  • So for AMD, check your data types and pixel formats, especially if you have a VBO/VAO with indices; IIRC they must be 32-bit unsigned integers, anything else might crash or even BSOD... Rendering context buffers must be set according to GL-supported pixel formats, as they dropped backward compatibility many years ago... But do not get me wrong, AMD is much, much better than the original ATI; it's just sad they shifted to bad manners like requiring .NET for the driver, etc. Also, did you check the GLSL logs? They usually hint at what is going wrong with your shader. – Spektre Mar 12 '22 at 06:39
  • If the problem is just that your texture pixel format is clamping, you can quickly check with these [GLSL debug prints](https://stackoverflow.com/a/44797902/2521214): simply put unclamped values inside the tested texture (like -5.3, +1234.567, ...) and print from the fragment shader what is really inside at some fixed location. Then just make an app that does this for all the texture formats you have and test which ones are not clamping (I remember I had a problem where some formats clamp that should not)... IIRC there are also GL functions to disable clamping, but those did not work for me when I needed them... – Spektre Mar 12 '22 at 06:49
  • BTW, from a quick look at your code I see you never check what pixel format your context really has... See [Getting a window's pixel format](https://stackoverflow.com/a/50248477/2521214); maybe you just force some format your AMD does not have, so it chooses something else... Also see [What is the proper OpenGL initialisation on Intel HD 3000?](https://stackoverflow.com/q/19099162/2521214); you can adapt it to test/choose your HDR-compatible formats instead of the standard ones. – Spektre Mar 12 '22 at 11:46
  • Thanks for trying to answer; I have tested everything you pointed out. No textures are used, to avoid any problems; I'm just using `gl_FragColor = vec4(25.0,25.0,25.0,1.0);` directly. This avoids transfer problems while testing. Also, all pixel formats are re-tested after setting; I just omitted that from the example for simplicity's sake. They are all set perfectly. – Milika Mar 13 '22 at 00:35
  • Just reviewed the whole app with RenderDoc, and all seems fine. The back buffer is 64-bit (16 bits per color), all formats are 64-bit; the only thing I can think of is that, for some reason, the DC of the window I am using is GDI, not DXGI, and perhaps can't show HDR colors at all. Or something along those lines... – Milika Mar 13 '22 at 01:58
  • And just a silly question: do the monitor and/or its connection support HDR? For example, I have here one LCD that is only 6 bits per channel no matter what you feed it with... Cables and interfaces have their limits too... – Spektre Mar 13 '22 at 07:33
  • LG B9, 12-bit 4:4:4 RGB via HDMI. I use the same TV for the NVIDIA testing, and can select the exact same modes in the drivers and Windows, so... yes. – Milika Mar 13 '22 at 14:14

0 Answers