
You can benchmark regular JavaScript functions by counting how many times they can be called in a second. In WebGL, though, functions such as gl.drawArrays are asynchronous, so you can't measure the time a shader takes by timing the API call.

Is there any way to benchmark WebGL functions?

MaiaVictor

2 Answers


It's very difficult to benchmark a shader because performance depends on a ton of context and is very GPU-specific.

You might be able to tell whether one shader is faster than another by reading performance.now before and after drawing a bunch of stuff with that shader (a few thousand to a million draw calls), then stalling the GPU by calling gl.readPixels. That will tell you which shader is faster. It won't tell you how fast either one is, because stalling the GPU includes the start-up and stall time in the measurement.

Think of cars. For a dragster you time its acceleration from a standing start. For a race car you time one lap at full speed: you let the car run one lap before timing and time the second lap, so it crosses both the starting line and the finish line going full speed. That way you get the race car's speed, whereas for the dragster you get its acceleration. Acceleration is generally irrelevant to GPUs, since if you're going for speed you should never start and stop them.
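A minimal sketch of that stall-timing approach. The harness itself is generic; the commented-out usage assumes a browser with a `gl` WebGL context already set up:

```javascript
// Time a batch of async work by issuing it, then forcing a full sync.
// The measured time includes start-up and stall overhead, so it is only
// good for comparing shaders against each other, not for absolute numbers.
function timeWithSync(issueWork, sync) {
  const start = performance.now();
  issueWork(); // e.g. a few thousand gl.drawArrays calls (queued, async)
  sync();      // e.g. gl.readPixels, which blocks until the GPU finishes
  return performance.now() - start;
}

// Hypothetical WebGL usage (gl and vertexCount assumed to exist):
// const ms = timeWithSync(
//   () => { for (let i = 0; i < 10000; ++i) gl.drawArrays(gl.TRIANGLES, 0, vertexCount); },
//   () => gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(4))
// );
```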

Another way to time without including the start/stop time is to draw a bunch of stuff between requestAnimationFrame frames. Keep increasing the amount drawn until the time between frames jumps up by a whole frame, then compare those amounts between shaders.
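One hedged sketch of that ramp-up idea, with the jump detection split out as plain JavaScript; the requestAnimationFrame driving loop (comments) assumes a browser and a hypothetical draw routine:

```javascript
// Given frame intervals measured while drawing 1, 2, 3, ... batches per
// frame, return the largest batch count whose interval had not yet jumped
// by roughly a whole frame over the baseline interval.
function batchesBeforeFrameJump(frameIntervals, baselineMs, frameMs = 1000 / 60) {
  for (let i = 0; i < frameIntervals.length; ++i) {
    if (frameIntervals[i] > baselineMs + frameMs * 0.5) return i;
  }
  return frameIntervals.length;
}

// Hypothetical browser driver: record the interval between successive
// requestAnimationFrame callbacks while increasing the number of batches
// drawn each frame, then compare batchesBeforeFrameJump across shaders.
```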

There are other issues in actual usage, though. For example, a tiled GPU (like PowerVR on many mobile devices) attempts to cull parts of primitives that will be overdrawn. So a heavy shader with lots of overdraw that is slow on a non-tiled GPU might be plenty fast on a tiled GPU.

Also make sure you're timing the right thing. If you're timing a vertex shader, you probably want to make your canvas 1x1 pixels, keep your fragment shader as simple as possible, and pass a lot of vertices in one draw call (to remove the call overhead). If you're timing a fragment shader, then you probably want a large canvas and a set of vertices that contains several full-canvas quads.

Also see WebGL/OpenGL: comparing the performance

gman

There's no way to get exact shader execution time without GPU vendor-specific tools. However, in addition to gman's suggestion, there is the EXT_disjoint_timer_query extension, which lets you measure the execution time of a draw call. That time, in turn, depends significantly on shader execution time, especially when your shaders are heavy (and thus account for the majority of the time the GPU spends executing your draw calls).
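A sketch of using that extension, with the WebGL1 entry points (WebGL2 has an `EXT_disjoint_timer_query_webgl2` variant that goes through `gl.beginQuery`/`gl.endQuery` instead). Here `gl`, `ext`, and `draw` are assumed to come from the surrounding application:

```javascript
// Wrap one draw call in a timer query and poll for its result, which
// arrives asynchronously a few frames later, in nanoseconds.
function timeDrawCall(gl, ext, draw, onResult) {
  const query = ext.createQueryEXT();
  ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
  draw();
  ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

  (function poll() {
    const available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
    const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (available && !disjoint) {
      onResult(ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT)); // nanoseconds
      ext.deleteQueryEXT(query);
    } else if (disjoint) {
      ext.deleteQueryEXT(query); // timing was disrupted; discard the result
    } else {
      setTimeout(poll, 16); // not ready yet; check again next frame
    }
  })();
}

// const ext = gl.getExtension('EXT_disjoint_timer_query'); // null if unsupported
```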

Kirill Dmitrenko
  • `EXT_disjoint_timer_query` won't give you useful info on tiled architectures like PowerVR (iOS), since they use a deferred renderer: the best they can do is give you the time for the entire frame after all the polygons have been split, put in tiles, occlusion clipped, all tiles rendered, etc. https://imgtec.com/blog/a-look-at-the-powervr-graphics-architecture-tile-based-rendering/ – gman Aug 08 '16 at 15:50
  • @gman Yes. However, we're talking about **shader** benchmarking. The circumstances give us an opportunity to make life uneasy for TBDR and thus obtain somewhat adequate results. Am I wrong here? Never tested that assumption. – Kirill Dmitrenko Aug 08 '16 at 15:54
  • But can you? I guess as long as you turn on blending you can at least get some idea of whether one shader is faster than another. With blending off, all your overdraw will be culled. You just have to remember the time you get back includes all the polygon splitting, tile bucketing, etc., so it's not just the time of your shader. That also assumes the extension even exists on tiled architectures, and raises the question of what it actually returns there. – gman Aug 08 '16 at 15:56
  • Checking the [Apple docs](https://developer.apple.com/library/ios/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/OpenGLESPlatforms/OpenGLESPlatforms.html) that extension doesn't currently exist on iOS so I guess that answers that – gman Aug 08 '16 at 16:01
  • I've somewhat successfully used [PowerVR tools](https://community.imgtec.com/developers/powervr/tools/) with Android to inspect overall occupancy of GPU (including draw calls timings and such). Maybe the tools can be used with iOS also. – Kirill Dmitrenko Aug 08 '16 at 16:06