Most resources on the internet claim that after performing calculations in High Dynamic Range (HDR), where values can be arbitrarily large (within the limits of the type representation), the result must be transformed from HDR to Standard Dynamic Range (SDR), where values lie in the range [0, 1], because that is the range the display device can represent.
Usually a tone-mapper is used, such as an ACES-style filmic tone-mapper, which is especially popular in PBR rendering. The reasoning is that the values would get clamped to the range [0, 1] anyway, so anything above 1 would simply be cut off, producing results that are neither realistic nor pleasing... or would they?
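For reference, this is roughly what I mean by tone-mapping: a curve that compresses arbitrary HDR values into [0, 1] before display, as opposed to a plain clamp. A minimal C++ sketch (per channel, using the widely used ACES curve fit by Krzysztof Narkowicz; in a real renderer this would of course live in a shader):

```cpp
#include <algorithm>

// Plain clamp: everything above 1.0 is simply lost.
float clampToSdr(float x) {
    return std::clamp(x, 0.0f, 1.0f);
}

// ACES filmic curve fit (Narkowicz approximation), applied per channel.
// Compresses HDR values smoothly into [0, 1] instead of cutting them off.
float acesTonemap(float x) {
    const float a = 2.51f, b = 0.03f, c = 2.43f, d = 0.59f, e = 0.14f;
    return std::clamp((x * (a * x + b)) / (x * (c * x + d) + e), 0.0f, 1.0f);
}
```

With a plain clamp, a pixel at 5.0 and a pixel at 50.0 both end up at 1.0; the tone-mapper keeps some separation between them, which is why it is preferred for SDR output.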
I recently stumbled across the following NVIDIA article: https://developer.nvidia.com/rendering-game-hdr-display. It seems that, by using NVIDIA- and Windows-specific APIs, it is possible to output HDR values directly and take advantage of monitors capable of displaying a much wider range of color and brightness.
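From what I can tell (this is my own reading, not necessarily exactly what the article does, and the helper name and the `device`/`hwnd`/`factory` objects below are just placeholders), on current Windows the vendor-neutral way to get HDR output is to create a floating-point swap chain and tag it with a wide-gamut color space through DXGI, something like this sketch:

```cpp
// Sketch only: error handling is minimal, `device`, `hwnd` and `factory`
// are assumed to already exist from normal D3D11/D3D12 setup.
#include <windows.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool createHdrSwapChain(IUnknown* device, HWND hwnd, IDXGIFactory2* factory,
                        ComPtr<IDXGISwapChain3>& outSwapChain) {
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT; // FP16 back buffer, values may exceed 1.0
    desc.BufferCount      = 2;
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;  // flip model is required for HDR output
    desc.SampleDesc.Count = 1;

    ComPtr<IDXGISwapChain1> swapChain1;
    if (FAILED(factory->CreateSwapChainForHwnd(device, hwnd, &desc,
                                               nullptr, nullptr, &swapChain1)))
        return false;
    if (FAILED(swapChain1.As(&outSwapChain)))
        return false;

    // scRGB (linear, Rec.709 primaries) is the color space Windows expects
    // for FP16 HDR swap chains; check support before setting it.
    const DXGI_COLOR_SPACE_TYPE colorSpace = DXGI_COLOR_SPACE_RGB_FULL_G10_NONE_P709;
    UINT support = 0;
    if (SUCCEEDED(outSwapChain->CheckColorSpaceSupport(colorSpace, &support)) &&
        (support & DXGI_SWAP_CHAIN_COLOR_SPACE_SUPPORT_FLAG_PRESENT)) {
        outSwapChain->SetColorSpace1(colorSpace);
        return true;
    }
    return false; // this display/driver path does not support HDR presentation
}
```

As I understand it, with such a swap chain the shader output stays in linear scRGB, where 1.0 corresponds to SDR reference white and values above 1.0 map to brighter levels on an HDR display, so no tone-mapping to [0, 1] is needed.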
I am wondering why this isn't talked about more. As far as I know, even large game engines such as Unity and Unreal don't use this technique and instead use a tone-mapper to output an SDR image.
Why is this the case? Is HDR output still at the tech-demo stage? Are there no consumer-grade displays capable of showing it (despite many being advertised as "HDR monitors")? Or did I misunderstand everything completely, and there is no such thing as HDR output and it's all just an April Fools' joke?
Edit: I would also appreciate some information about a cross-platform approach to HDR output, if one exists, especially in OpenGL.