
I have a game written with SpriteKit which uses an SKEffectNode with a blur effect to blur a set of sprites, one of which has a fairly large texture, and which together cover a fairly large area of the screen. An iMac and MacBook Pro cope quite happily with this, but on a more humble MacBook there is a noticeable drop in frame rate with the effect node added in. Since the effect isn't crucial to the functionality of the game, I could simply not add the SKEffectNode on machines with less powerful graphics capabilities.

So then the question: what would be a good programmatic check that I could make to determine the "power of the GPU" or "performance when applying texture effects" or [suggest better metric here] and via what API? Thanks for your suggestions!

Neil Coffey
  • Sorry for the weird questions: is this effect static, in the sense that you're not animating the amount of the blur from, say, 10 units to 100 units, i.e. it's always at e.g. 100 units of blur? – Confused Jan 14 '17 at 12:57
  • Q2: Does the texture that's to be blurred change, or is it always the same texture? – Confused Jan 14 '17 at 12:58
  • Question reasoning: if the blur amount is static and the texture can be predicted, you can do one significant thing to optimise performance: pre-render the blur and store the result as a texture to apply whenever you need the blurred effect. If the texture changes but the amount of blur is constant, you can do the blur whenever you need it and then set shouldRasterize to true: https://developer.apple.com/reference/spritekit/skeffectnode/1459381-shouldrasterize – Confused Jan 14 '17 at 13:02
  • Personally I would try to optimize the code if possible, as Confused suggested, but if that is not an option, see if this helps: http://stackoverflow.com/a/7377952/3402095 – Whirlwind Jan 14 '17 at 14:37
  • Thanks, will look at the latter post. Having the blur baked in isn't possible, unfortunately (sorry, I didn't quite explain properly originally -- it's not *just* one sprite with a static texture but a number of sprites on top of one another with the effect node applied across them all). – Neil Coffey Jan 14 '17 at 16:59

1 Answer


You'll have to create a performance test using your actual blurring processes and some sample content to get an accurate idea of their time cost on each generation of hardware.
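As a rough illustration, a timing harness along these lines could run once at startup to decide whether to enable the effect. The helper names and the 8 ms budget here are assumptions, not SpriteKit API; in the real game the workload closure would render your actual SKEffectNode subtree once (e.g. via SKView's texture(from:)):

```swift
import Foundation

// Hypothetical helper: time one run of a closure that stands in for
// the real blur pass.
func timeOneRun(_ work: () -> Void) -> TimeInterval {
    let start = Date()
    work()
    return Date().timeIntervalSince(start)
}

// Run the workload several times and keep the median, which is far
// more stable than a single sample.
func medianTime(runs: Int, _ work: () -> Void) -> TimeInterval {
    let samples = (0..<runs).map { _ in timeOneRun(work) }.sorted()
    return samples[samples.count / 2]
}

// Placeholder workload; replace with the real blur render.
let cost = medianTime(runs: 5) {
    _ = (0..<100_000).reduce(0, &+)
}

// Assumed threshold: half a 60 fps frame. Tune for your game.
let enableBlur = cost < 0.008
```

The decision is then just a boolean you consult when building the scene: add the SKEffectNode only when `enableBlur` is true.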

Blurs are really weird things, programmatically. A box blur can give you most of the appearance of a nice, soft Gaussian blur for much less processing cost. A zoom or motion blur (that looks good) is surprisingly expensive, even on strong hardware.

And there are some amazingly effective "cheats" when doing blurs. Because there's no need for detail, you can heavily optimise the operations, particularly if the blurs are strong.

Apple, it's believed, does something like this, for example, with its blurs:

  1. Massively shrink the target image
  2. Do a Gaussian blur on this tiny image
  3. Scale it back up, somewhat
  4. Apply a cheap Box Blur to soften it
  5. Fully scale back to the desired size
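The payoff of step 1 can be sketched numerically: blur cost scales with pixel count times kernel area, so a quarter-scale image needs roughly 1/16 of the work. A minimal pure-Swift illustration, using a naive box blur and nearest-neighbour scaling as stand-ins (these are illustrative, not the filters SpriteKit or Core Image actually use):

```swift
// Naive box blur over a grayscale image stored row-major.
// Cost is O(width * height * kernel^2), so halving each dimension
// cuts the work roughly 4x before the kernel is even shrunk.
func boxBlur(_ pixels: [Double], width: Int, height: Int, radius: Int) -> [Double] {
    var out = [Double](repeating: 0, count: pixels.count)
    for y in 0..<height {
        for x in 0..<width {
            var sum = 0.0
            var count = 0
            for dy in -radius...radius {
                for dx in -radius...radius {
                    let nx = x + dx, ny = y + dy
                    if nx >= 0, nx < width, ny >= 0, ny < height {
                        sum += pixels[ny * width + nx]
                        count += 1
                    }
                }
            }
            out[y * width + x] = sum / Double(count)
        }
    }
    return out
}

// Nearest-neighbour downscale by an integer factor (stands in for
// step 1; real code would use filtered scaling for better quality).
func downscale(_ pixels: [Double], width: Int, height: Int, factor: Int) -> [Double] {
    let w = width / factor, h = height / factor
    var out = [Double](repeating: 0, count: w * h)
    for y in 0..<h {
        for x in 0..<w {
            out[y * w + x] = pixels[(y * factor) * width + (x * factor)]
        }
    }
    return out
}
```

Blurring the downscaled image (with a proportionally smaller radius) and then scaling back up approximates the full-size blur at a fraction of the cost, which is exactly why the shrink-first approach works.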

By way of a terrible example that benefits from scaling well (with filtering set for good scaling):

This is the full-sized image, blurred:

[image: the full-sized image, blurred directly]

And here's a version of the same image, scaled to a 16th of its original size, blurred, and then the blurred image scaled back up. As you can see, due to the good scaling and lack of detail, there's hardly any difference in the blurred image, but the blur takes MUCH less processing energy and time:

[image: the downscaled-then-blurred version, scaled back up]

Confused