
I have a 2D HTML5 game engine (www.scirra.com) and really want to detect if WebGL is going to render with Chrome 18's 'SwiftShader' software renderer. If so, we would much prefer to fall back to the ordinary canvas 2D context, as happens in other browsers. Masses of people out there have low-end machines with weak CPUs that turn the game into a slideshow under software rendering, and I think in many cases the 2D canvas would have been hardware-accelerated. However, WebGL context creation never fails in Chrome, and there is no obvious way to detect SwiftShader.

Things I've tried:

// Assuming a WebGL context created along these lines (Chrome 18 still
// uses the 'experimental-webgl' context name):
var gl = document.createElement('canvas').getContext('experimental-webgl');

// Always returns "WebKit WebGL" regardless of SwiftShader
gl.getParameter(gl.RENDERER);

// Always returns "WebKit" regardless of SwiftShader
gl.getParameter(gl.VENDOR);

I could try taking into account things like the maximum texture size or the other MAX_* properties, but how do I know they don't vary between machines even with SwiftShader? And since I guess SwiftShader aims to mimic common hardware, that approach might still produce a lot of false positives.
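
For reference, those caps are all queried through gl.getParameter with standard WebGL constants:

// A few of the MAX_* caps such a fingerprint could look at
var maxTextureSize   = gl.getParameter(gl.MAX_TEXTURE_SIZE);
var maxVertexAttribs = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
var maxVaryings      = gl.getParameter(gl.MAX_VARYING_VECTORS);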

I don't want to write a startup performance test, because:

  • We just make an engine, not any particular game, so I don't know how we'd write a fair test that works in the general case for any game of any performance profile with a high degree of accuracy.
  • A good test would probably need a second or two to finish running, which could interrupt the user experience or force the user to watch some squares being shifted around.
  • It could create new complications: if we cache the result, what happens when the user updates their drivers and fixes the problem?

I don't want to flat out disable WebGL on Chrome, because with hardware-accelerated WebGL performance can be over twice as fast as canvas 2D! If we did that, everyone loses.

I don't want to have to add in-game switches or a user setting, because how many users care about that? If the game is slow they'll just quit and most likely not search for a solution. "This game sucks, I'll go somewhere else." I think only a minority of users would bother reading instructions like "by the way, if this game is slow, try changing this setting to 'canvas 2D'..."

My current best guess is to use gl.getSupportedExtensions(). I have found that SwiftShader reports the following extensions:

OES_texture_float,OES_standard_derivatives,WEBKIT_WEBGL_lose_context

...but a real hardware-accelerated context reports:

OES_texture_float,OES_standard_derivatives,WEBKIT_WEBGL_lose_context,WEBKIT_WEBGL_compressed_textures

Note the addition of WEBKIT_WEBGL_compressed_textures. Some quick research indicates this may or may not be widely supported. See this support table: both GL_EXT_texture_compression_s3tc and GL_ARB_texture_compression appear widely supported on desktop cards. The table also only seems to list reasonably old models, so I could hazard a guess that all modern desktop graphics cards support WEBKIT_WEBGL_compressed_textures. My detection criteria for SwiftShader would therefore be (a sketch follows the list):

  • Windows OS
  • Google Chrome browser
  • WebGL context does not support WEBKIT_WEBGL_compressed_textures
  • Result: fall back to Canvas 2D
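
Here is a minimal sketch of that heuristic, reusing the gl context from above (the user-agent checks are deliberately rough, and the extension name is the vendor-prefixed one Chrome 18 reports):

// Heuristic: Chrome on Windows with no compressed-texture extension
function looksLikeSwiftShader(gl) {
    var ua = navigator.userAgent;
    if (ua.indexOf("Windows") === -1 || ua.indexOf("Chrome") === -1)
        return false;   // heuristic only applies to Chrome on Windows
    var exts = gl.getSupportedExtensions();
    return exts.indexOf("WEBKIT_WEBGL_compressed_textures") === -1;
}

// A canvas can only ever hold one context type, so probe a throwaway
// canvas rather than the real game canvas.
var probe = document.createElement("canvas").getContext("experimental-webgl");
var useWebGL = probe && !looksLikeSwiftShader(probe);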

Of course, if SwiftShader adds compressed texture support in future, this breaks again. But I can't see the advantage of compressed textures with a software renderer! Also, it will still get lots of false positives if there are many real working video cards out there that don't support WEBKIT_WEBGL_compressed_textures!

Is there not a better way to detect SwiftShader?

AshleysBrain
  • Remember that the WEBKIT_WEBGL_compressed_textures extension is available on windows because: 1) it's an experimental hardware feature; 2) Hardware acceleration on Chrome/Windows is done through the ANGLE library, which is an OpenGL ES implementation backed by Direct3D (which has much better hardware support than OpenGL in some Intel boards). So that flag is directly related to Direct3D compressed texture support. – Chiguireitor May 05 '12 at 02:35
  • @Chiguireitor - are you saying that particular extension is common, or a good indicator of real hardware support? That would be reassuring. – AshleysBrain May 05 '12 at 02:43
  • Don't take my word for it, check the project directly (http://code.google.com/p/angleproject/). I saw a demo by Toji showing texture compression, which is "almost" a certain sign of hardware acceleration, but remember that a => b doesn't necessarily imply b => a... you won't have a compressed_textures extension on non-current hardware. – Chiguireitor May 05 '12 at 03:27

4 Answers

2

Go to http://code.google.com/p/angleproject/wiki/ExtensionSupport and look at the EGL extension EGL_ANGLE_software_display: if it is available, it is because there's a SwiftShader backend.

Chiguireitor
  • The problem is: currently there's no way to verify EGL extensions (at least from JavaScript; you should check NaCl to see if it's possible from there). – Chiguireitor May 05 '12 at 03:39
  • There's also this spec http://www.khronos.org/registry/webgl/extensions/WEBGL_debug_renderer_info/ but it seems to be in its infancy. I don't think you should count on this being solved soon, but it wouldn't hurt to post an issue on the Chromium project; this could go under their "Important-for-games" flag. – Chiguireitor May 05 '12 at 03:49
  • Interesting... but the name implies it's only available under special debug circumstances... I'm not sure it will be there all the time. Also, WebGL doesn't expose that EGL extension I'm afraid. – AshleysBrain May 05 '12 at 05:33
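
For reference, this is how the WEBGL_debug_renderer_info extension mentioned in the comments above would be queried if a browser exposed it (at the time of writing, support was in its infancy):

// Query the unmasked renderer string, where the extension is available
var dbg = gl.getExtension("WEBGL_debug_renderer_info");
if (dbg) {
    var renderer = gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL);
    var isSwiftShader = /SwiftShader/i.test(renderer);
}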
2

You say you don't want to write a “startup performance test” — that is, render a few frames using both 2D and WebGL and measure which is faster, then use that one — but I still think it is the best option.

Advantages:

  • Selects the renderer that is actually faster on the current hardware, regardless of its apparent attributes.

Disadvantages/caveats:

  • You have to load your resources for both renderers, which might increase load time (but you can remember the choice so that it only ever happens once).
  • If the system is slow at first (e.g. due to paging) the measurement could be wrong.
  • Hardware without active cooling (e.g. phones) may overheat and reduce performance at some later time. You can't directly measure the heat output of the two methods. (If you have a good opportunity to re-measure and re-select the rendering method, such as a static screen while loading a new level, you could do that.)

Addressing your specific concerns:

  • Since you are writing a game engine, you will have to provide the game developer with a way to specify sample content to use in the test.
  • A second or two isn't a whole lot of additional load time, especially if (as I suspect) there is no reliable way to discriminate renderers otherwise. There is no need to make the canvas elements you are using visible to the user during the test; you could present an entirely unrelated loading animation.
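
As a rough illustration, here is a sketch of such a test: time a fixed number of frames on each renderer and keep the faster one. drawWithCanvas2D and drawWithWebGL stand in for engine-supplied sample content, and requestAnimationFrame was still vendor-prefixed in Chrome of this era:

// A healthy renderer will be vsync-capped, so what this really catches
// is the pathologically slow one.
var raf = (window.requestAnimationFrame ||
           window.webkitRequestAnimationFrame).bind(window);

function timeRenderer(drawFrame, frames, done) {
    var start = Date.now(), count = 0;
    function tick() {
        drawFrame();                    // placeholder sample content
        if (++count < frames)
            raf(tick);
        else
            done(Date.now() - start);
    }
    raf(tick);
}

timeRenderer(drawWithCanvas2D, 60, function (msCanvas) {
    timeRenderer(drawWithWebGL, 60, function (msWebGL) {
        var useWebGL = msWebGL < msCanvas;
        // remember the choice so the test only ever runs once
        localStorage.setItem("useWebGL", useWebGL ? "1" : "0");
    });
});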
Kevin Reid
    "we just make an engine, not any particular game, so I don't know how we'd write a fair test which works in the general case for any game of any performance profile with a high degree of accuracy" – AshleysBrain May 05 '12 at 23:48
  • @AshleysBrain Sorry, I missed that you already mentioned that option. I've responded to your bullet points. – Kevin Reid May 05 '12 at 23:59
  • We have a large engine where it's not uncommon to have around a megabyte of JavaScript. This means the first few seconds are often a bit stuttery due to parsing and JIT compiling the script, which may also not be done until the script actually starts running, i.e. perfectly timed with when we start running a performance test. So I anticipate that the performance test will routinely run alongside other processing-intensive operations, ruining the accuracy of the results. Can you think of a way around this? – AshleysBrain May 06 '12 at 00:52
1

What you really want to know is if it would be better to present your game in Canvas2D instead of WebGL. That's not the same question as whether or not it's running on top of Swiftshader.

Honestly, I don't know why asking the user is unacceptable.

Many of the top-selling games of all time have these options, including:

  • Call of Duty Modern Warfare 3
  • Battlefield 3
  • Angry Birds (http://chrome.angrybirds.com/)
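
If you did expose such an option, persisting the choice is only a few lines (the names here are illustrative, not any particular engine's API):

// Sketch of a persisted renderer preference
function getRendererChoice() {
    return localStorage.getItem("renderer") || "webgl";   // default
}
function setRendererChoice(choice) {    // "webgl" or "canvas2d"
    localStorage.setItem("renderer", choice);
    window.location.reload();           // simplest way to swap contexts
}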
gman
  • I can't see a WebGL vs. Canvas2D option in Angry Birds that you linked to, just a SD/HD option to switch screen size. Both your other examples are hardcore desktop games which you'd expect to have a tonne of options, but casual games generally have drive-by users who don't care so much, so it's better not to have many options at all for them, especially not obscure technical options. Also, what I *REALLY* want is to know which is hardware accelerated, but in practice the only real problem is Swiftshader. – AshleysBrain May 05 '12 at 00:35
  • HD/SD in Angry Birds IS WebGL vs Canvas2D. And no, you don't want to know if it's hardware accelerated. You want to know if it's fast or slow. If software was faster than some crappy hardware, why would you choose the hardware? – gman May 05 '12 at 01:59
  • Because the hardware renderer will save battery and leave CPU free for other apps, even if it's slower. Plus there are a huge number of users out there with low-end machines which have terrible, unplayable software-rendered performance, but playable when using hardware acceleration. That is a showstopper problem. The number of users with CPUs powerful enough to out-do their graphics cards is probably much smaller, and in that case whichever is chosen does not make the game totally unplayable. So using software rendering when hardware is available is a *much* more severe problem. – AshleysBrain May 05 '12 at 02:06
1

SwiftShader is actually faster than some integrated graphics. So detecting GPU or CPU rendering gives you no guarantees about actual performance. Also, SwiftShader is the fastest software renderer around and should do a really decent job with simple games. Are you sure your application is properly optimized?

Alex
  • Your statement is true, but SwiftShader is much slower than hardware-accelerated Canvas 2D on some computers. That's why AshleysBrain is asking this. – Chiguireitor May 05 '12 at 02:37
  • Software rendering: unplayable for some users with weak CPUs. Hardware rendering: I'd go as far as saying it's *never* unplayable, even with weak GPUs. We don't want whichever's fastest, we want whichever's the GPU. It's safer. – AshleysBrain May 05 '12 at 02:41
  • Let me bring up one more factor: full emulation produces more heat, spinning up the fans or even overheating and throttling the CPU mid-game. Choosing whichever renderer is at least partially hardware-accelerated is safer heat-wise. – user185953 Dec 09 '20 at 18:02