
My understanding of the OpenGL ARB_debug_output extension is that it is designed to enable reporting of events from the driver or, if needed, from the graphical application itself (i.e. from CPU-run code).

Is there any way custom events could be emitted from the shaders, so as to ease their debugging? Or, if this is not possible, a way to piggyback on an existing, shader-triggerable event?

I am aware of the very adverse impact this would have on performance, but this extension is already designed for a debug context anyway.
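For reference, this is what the CPU-side injection mentioned above looks like. A minimal sketch, assuming a debug context and an extension loader such as GLEW; `emit_debug_marker` is just an illustrative helper name:

```c
#include <string.h>
#include <GL/glew.h>

/* Inject a custom message into the debug output stream from application
 * code. ARB_debug_output only accepts client-inserted messages whose
 * source is APPLICATION or THIRD_PARTY. */
static void emit_debug_marker(const char *msg)
{
    glDebugMessageInsertARB(GL_DEBUG_SOURCE_APPLICATION_ARB,
                            GL_DEBUG_TYPE_OTHER_ARB,
                            0,                         /* arbitrary user id */
                            GL_DEBUG_SEVERITY_LOW_ARB,
                            (GLsizei)strlen(msg),
                            msg);
}
```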

oparisy
  • There really is no such thing as a shader-triggerable event, at least not in any portable sense. Some cross-vendor HLSL tools and at least one [NV-specific GLSL tool](https://developer.nvidia.com/nvidia-nsight-visual-studio-edition) let you insert breakpoints into shaders, but that is ***way*** beyond the scope of GL; you need something much closer to the actual hardware/driver. – Andon M. Coleman May 08 '14 at 21:28

1 Answer


No, there isn't.

Not only is it unlikely that modern GPU hardware supports something like this, but a fragment shader executes once per covered fragment, so even a single triangle can produce hundreds or thousands of invocations; if each of those executions sent an event, it would very much bog everything down.

Colonel Thirty Two
  • wrt performance issues: that would be a debugging event triggered by a specific test/situation, and only used in a debugging setting. So this is no different from the issues associated with a typical logging system, which we are used to coping with without slowing apps to a halt. – oparisy May 09 '14 at 05:54
  • That doesn't change the fact that this simply doesn't exist, and probably won't because shaders don't even have a string type, let alone the hardware to send this stuff to the CPU. – Colonel Thirty Two May 09 '14 at 11:56
  • I beg to disagree: [Get results of GPU calculations back to the CPU program in OpenGL](http://stackoverflow.com/questions/14086926/get-results-of-gpu-calculations-back-to-the-cpu-program-in-opengl) – oparisy May 09 '14 at 18:22
  • There's a big difference between writing one vertex/fragment per invocation and sending back potentially multiple strings. – Colonel Thirty Two May 09 '14 at 21:27
  • It is technically possible to generate interrupts based on GPU execution; sparse textures require them. However, shader languages do not expose this capability. Some vendor-specific applications like Nsight from NV are based on this; they let you insert breakpoints into GLSL code and debug them interactively on the GPU. Microsoft also has this capability for HLSL, but it is implemented by evaluating the shaders in software and is extremely slow. Nevertheless, no tool currently in existence is going to let you do a trace log or anything like that; the best you can really do is breakpoints. – Andon M. Coleman May 10 '14 at 01:32
  • @AndonM.Coleman Thanks, I suspected something along those lines. I understand that breakpoints can be expressed as a kind of synchronous event, and the kind of pressure they put on the GPU. But my original question was more about logging expression values ("watching"), for which asynchronous events would suffice. Perhaps the idea of events is a red herring, and I should investigate in the direction of GPU -> CPU transfers (a sketch of that approach follows this thread)? One per rendered frame would be enough for debugging purposes... – oparisy May 10 '14 at 06:54
  • @AndonM.Coleman May I ask you to rephrase your comment as an answer so that I could accept it? Your position on my last comment (asynchronous events / end of drawing transfers instead of synchronous events / breakpoints) would be great, too. – oparisy May 10 '14 at 06:56
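For completeness, here is what the GPU -> CPU route floated in the last comments could look like. This is only a sketch of one possible approach, not an established debugging API: it assumes GL 4.3 (or ARB_shader_storage_buffer_object), which is newer than ARB_debug_output, and a hypothetical "watched" pixel at (400, 300). The fragment shader writes one expression's value into a tiny buffer, and the application reads it back once per frame:

```glsl
#version 430

layout(location = 0) out vec4 fragColor;

// One-slot "watch window" the shader can dump a value into
// (DebugWatch/watched are illustrative names, not a standard interface).
layout(std430, binding = 0) buffer DebugWatch { vec4 watched; };

void main()
{
    vec4 color = vec4(1.0, 0.0, 1.0, 1.0);          // stand-in for real shading
    if (ivec2(gl_FragCoord.xy) == ivec2(400, 300))  // hypothetical watch pixel
        watched = color;                            // the expression under inspection
    fragColor = color;
}
```

```c
/* Application side: create and bind the watch buffer once at startup. */
GLuint watchBuf;
glGenBuffers(1, &watchBuf);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, watchBuf);
glBufferData(GL_SHADER_STORAGE_BUFFER, 4 * sizeof(float), NULL, GL_DYNAMIC_READ);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, watchBuf);

/* Then, once per frame, after the draw calls: */
float watched[4];
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);  /* make shader writes visible */
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(watched), watched);
printf("watched = %f %f %f %f\n",
       watched[0], watched[1], watched[2], watched[3]);
```

The synchronous read-back stalls the pipeline, but as with the logging discussion above, that is acceptable in a debug context; an asynchronous variant could rotate between a few buffers or use `glFenceSync` to avoid waiting on the current frame.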