
I'm given exact-size .png renders from Application Design showing exactly what my app should look like on Retina 4", Retina 3.5", etc.

I'd like to automate a comparison between these "golden master" renders and screenshots of what the app actually looks like when that screen is shown.

Ideally I would like to have something I can run via continuous integration so I can break the build if a .xib gets messed up.

How can I do this?


Already tried:

  • Used Command-S in the iPhone Simulator to grab a screenshot suitable for comparison
  • Used GitHub's excellent image-diff interface to compare the images manually
  • Pulled them up side by side in Preview.app at actual size (Command-0)
  • Did some research on ImageMagick's comparison capabilities (a sketch of driving it from CI follows this list)
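
For the CI side, here's the kind of script I have in mind for driving ImageMagick's compare from Swift (assuming ImageMagick is installed on the build machine; the paths are placeholders):

    import Foundation

    // Runs ImageMagick's `compare` with the absolute-error (AE) metric, which
    // prints the number of differing pixels to stderr. "null:" discards the
    // visual diff image; pass a real path there to keep it for debugging.
    func pixelsDiffering(golden: String, screenshot: String) throws -> Int {
        let process = Process()
        process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
        process.arguments = ["compare", "-metric", "AE", golden, screenshot, "null:"]
        let stderrPipe = Pipe()
        process.standardError = stderrPipe
        try process.run()
        process.waitUntilExit()
        let output = String(data: stderrPipe.fileHandleForReading.readDataToEndOfFile(),
                            encoding: .utf8) ?? ""
        return Int(output.trimmingCharacters(in: .whitespacesAndNewlines)) ?? Int.max
    }

    // Break the build if any pixel differs.
    let diff = try pixelsDiffering(golden: "comps/login@2x.png",
                                   screenshot: "screenshots/login@2x.png")
    exit(diff == 0 ? 0 : 1)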

Possible approaches:

  • Getting a screenshot of the app in code is already implemented
  • Similarly, I'm pretty sure I can find code to simulate a tap on the screen
  • I'll likely need a way to mask out a bounding box of areas known not to match exactly, like the clock in the status bar (see the sketch just after this list)
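
Here's a rough sketch of how I imagine the masked comparison working on-device (assuming both images have identical pixel dimensions; all the names are placeholders of mine):

    import UIKit

    // Compares two images pixel by pixel, skipping any pixel where the mask is
    // non-transparent, so regions like the status bar clock can be excluded.
    // Only RGB is checked; alpha is ignored in the comparison itself.
    func imagesMatch(_ a: UIImage, _ b: UIImage, mask: UIImage? = nil) -> Bool {
        guard let aBytes = rgbaBytes(of: a), let bBytes = rgbaBytes(of: b),
              aBytes.count == bBytes.count else { return false }
        let maskBytes = mask.flatMap(rgbaBytes)
        for i in stride(from: 0, to: aBytes.count, by: 4) {
            if let m = maskBytes, i + 3 < m.count, m[i + 3] > 0 { continue }
            if aBytes[i] != bBytes[i] || aBytes[i + 1] != bBytes[i + 1] ||
               aBytes[i + 2] != bBytes[i + 2] { return false }
        }
        return true
    }

    // Redraws an image into a plain 8-bit RGBA buffer so two images with
    // different internal byte layouts can still be compared byte for byte.
    private func rgbaBytes(of image: UIImage) -> [UInt8]? {
        guard let cg = image.cgImage else { return nil }
        let w = cg.width, h = cg.height
        var bytes = [UInt8](repeating: 0, count: w * h * 4)
        let drew = bytes.withUnsafeMutableBytes { buffer -> Bool in
            guard let ctx = CGContext(data: buffer.baseAddress, width: w, height: h,
                                      bitsPerComponent: 8, bytesPerRow: w * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }
            ctx.draw(cg, in: CGRect(x: 0, y: 0, width: w, height: h))
            return true
        }
        return drew ? bytes : nil
    }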

4 Answers


Take a look at ios-snapshot-test-case, which was built for something close to this.

It will take a reference image the first time a test is run, then compare subsequent test outputs to that reference image. You could use it essentially as-is, except that instead of letting the tests create the reference images, you supply your own.

In practice, this will be extremely tricky to get right. There are subtle differences in how text, gradients, etc. are rendered between iOS and whatever tool your designers are using.
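
For illustration, a rough sketch of what such a test could look like, assuming a version of the library with a Swift interface (LoginViewController is hypothetical):

    import FBSnapshotTestCase

    class LoginScreenSnapshotTests: FBSnapshotTestCase {
        override func setUp() {
            super.setUp()
            // Flip to true once to record reference images from the app itself.
            // To compare against the designers' comps instead, leave this false
            // and drop suitably named .pngs into the reference-images directory.
            recordMode = false
        }

        func testLoginScreenMatchesReference() {
            let vc = LoginViewController()  // hypothetical view controller
            vc.view.frame = UIScreen.main.bounds
            FBSnapshotVerifyView(vc.view)
        }
    }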


I'd check out KIF for functional testing.

You can create a custom test that takes a screenshot and compares it to your expected screenshot for that view (there's a small example near the end of the readme, just above "Use with other testing frameworks"). Just call failWithException:stopTest: if it doesn't match.

As you mentioned, you will want to save a mask with each expected screenshot, and apply the mask before comparing. You will always have parts of the screen that won't match, like the time in the status bar at a minimum.
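
Roughly, such a test might look like the sketch below. It assumes a modern XCTest-based KIF with the usual tester() Swift helper, reuses the imagesMatch(_:_:mask:) helper sketched in the question, and uses a plain XCTest assertion in place of the older failWithException:stopTest:; the image names are placeholders:

    import KIF
    import XCTest

    class GoldenMasterTests: KIFTestCase {
        func testLoginScreenMatchesComp() {
            tester().waitForView(withAccessibilityLabel: "Login")

            // Render the key window the same way a screenshot would capture it.
            let window = UIApplication.shared.keyWindow!
            UIGraphicsBeginImageContextWithOptions(window.bounds.size, true, 0)
            window.drawHierarchy(in: window.bounds, afterScreenUpdates: true)
            let actual = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()

            let expected = UIImage(named: "golden_login")!  // the designers' comp
            let mask = UIImage(named: "login_mask")         // nil compares every pixel
            XCTAssertTrue(imagesMatch(actual, expected, mask: mask),
                          "Login screen no longer matches the golden master")
        }
    }

Note that the comp you load has to match the pixel dimensions of the device you run on, so you'll likely need one reference set per screen size (Retina 3.5" vs. 4").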

For the comparison itself, here are a couple links:

dokkaebi
  • Thank you for this! Did some browsing through their code. The concept of using the accessibility information is pretty innovative. They also implemented text entry by simulating taps on the on-screen keyboard, which is impressive. I still need to figure out an approach for the screenshot comparison and mask mechanism. – funroll Mar 25 '14 at 20:47
  • Edited in a couple links that might help get you started on that part. – dokkaebi Mar 25 '14 at 21:08
  • I wish I could accept both answers. This one is very helpful. – funroll Oct 28 '14 at 17:16

I know this is an older question, but it's worth pointing out that the KIF team has built a "Perceptual Difference Testing Framework" called Lela:

https://github.com/kif-framework/Lela

If you're already using KIF, this is the way to go. I believe it uses somewhat fuzzy image diffing, so it may be able to get around the text-rendering issues David Grandinetti mentioned. I haven't tried using it against external comps, though.

If you're more comfortable with BDD/Cucumber/Gherkin syntax, you should also check out Zucchini, which uses reference images:

http://zucchiniframework.org/

I haven't used it but it's well spoken of.

Mark Guinn

I suggest you take a look at Visual CI.

It's software built for continuous-integration image comparison. It has a UI that lets you control comparison settings, including which parts of your image to compare.

It's fairly new, but it may answer your requirements better.

Mike Ruder