
For a reaction time study (see also this question if you're interested) we want to control and measure the display time of images. We'd like to account for the time needed to repaint on different users' machines.

Edit: Originally, I used only inline execution for timing, but I didn't trust it to accurately measure how long the picture was visible on the user's screen, because painting takes some time.

Later, I found the event "MozAfterPaint". It requires a configuration change to run on users' computers, and the corresponding WebkitAfterPaint never shipped. This means I can't use it on users' computers, but I did use it for my own testing. The relevant code snippets and my test results are below.
I also manually checked results with SpeedTracer in Chrome.

// from the loop pre-rendering images for faster display
// (i, toppath, botpath and botimg are defined earlier in the loop)
var imgdiv = $('<div class="trial_images" id="trial_images_' + i + '" style="display:none">' +
               '<img class="top" src="' + toppath + '"><br>' +
               '<img class="bottom" src="' + botpath + '"></div>');
Session.imgs[i] = imgdiv.append(botimg);
$('#trial').append(Session.imgs);

// in Trial.showImages
$(window).one('MozAfterPaint', function () {
    Trial.FixationHidden = performance.now();
});
$('#trial_images_'+Trial.current).show(); // this would cause reflows, but I've since changed it to use the visibility property and absolutely positioned images, to minimise reflows
Trial.ImagesShown = performance.now();

Session.waitForNextStep = setTimeout(Trial.showProbe, 500); // intended display duration: 500 ms

// in Trial.showProbe
$(window).one('MozAfterPaint', function () {
    Trial.ImagesHidden = performance.now();
});
$('#trial_images_'+Trial.current).hide();
Trial.ProbeShown = performance.now();
// show Probe etc...
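
Since switching to the visibility approach mentioned in the code comment above, the show/hide calls look roughly like this (a sketch, not verbatim study code; the images are absolutely positioned, so toggling visibility triggers a repaint but no reflow):

// show the images (repaint only, no layout pass)
$('#trial_images_' + Trial.current).css('visibility', 'visible');

// hide them again in Trial.showProbe
$('#trial_images_' + Trial.current).css('visibility', 'hidden');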

Results from comparing the durations measured using MozAfterPaint and inline execution.

This doesn't make me too happy. First, the median display duration is about 30 ms shorter than I'd like. Second, the variance using MozAfterPaint is pretty large (and bigger than for inline execution), so I can't simply compensate by increasing the setTimeout by 30 ms. Third, this is on my fairly fast computer; results for other computers might be worse.

[Figure: boxplot of durations]

[Figure: relationship of durations measured using the two methods]
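
For completeness, a minimal sketch of how the per-trial numbers behind these plots can be collected; the helper names are illustrative, not the actual study code:

// per-trial deviation from the intended 500 ms, paint-to-paint
// (timestamps recorded by the MozAfterPaint handlers above)
var deltas = [];
function logTrial() {
    deltas.push((Trial.ImagesHidden - Trial.FixationHidden) - 500);
}

// summarise after the session
function median(xs) {
    var s = xs.slice().sort(function (a, b) { return a - b; });
    var m = Math.floor(s.length / 2);
    return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}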

Results from SpeedTracer

These were better. The time an image was visible was usually within 4 (sometimes 10) ms of the intended duration. It also looked like Chrome accounted for the time needed to repaint in the setTimeout call (so there was a 504 ms difference between calls when the image needed to repaint). Unfortunately, I wasn't able to analyse and plot results for many trials in SpeedTracer, because it only logs to the console. I'm not sure whether the discrepancy between SpeedTracer and MozAfterPaint reflects differences between the two browsers or something lacking in my usage of MozAfterPaint (I'm fairly sure I interpreted the SpeedTracer output correctly).

Questions

I'd like to know

  1. How can I measure the time an image was actually visible on the user's machine, or at least get comparable numbers for a set of different browsers on different testing computers (Chrome, Firefox, Safari)? One cross-browser approximation is sketched after this list.
  2. Can I offset the rendering and painting time to arrive at 500 ms of actual visibility? Having to rely on a universal offset would be worse, but still better than showing the images for such a short duration that users on somewhat slow computers don't consciously see them.
  3. We use setTimeout. I know about requestAnimationFrame, but it doesn't seem like we'd gain anything from using it: the study is supposed to stay in focus for its entire duration, and it's more important that we get an accurate 500 ms display than a certain number of fps. Is my understanding correct?
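
For question 1, one cross-browser approximation would be the nested requestAnimationFrame idiom sketched below: the inner callback runs on the frame after the one containing the visibility change, so its timestamp should land close to the actual paint. This is an approximation, not a real paint event:

// approximate "after paint" timestamp without MozAfterPaint
function afterNextPaint(callback) {
    window.requestAnimationFrame(function () {
        // this callback fires just before the change is painted;
        // the nested one fires once the next frame starts, i.e. after the paint
        window.requestAnimationFrame(function () {
            callback(performance.now());
        });
    });
}

// usage sketch with the snippets from above
$('#trial_images_' + Trial.current).show();
afterNextPaint(function (t) {
    Trial.ImagesShown = t; // close to when the images hit the screen
});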

Obviously, JavaScript is not ideal for this, but it's the least bad option for our purposes (the study has to run online on users' own computers, asking them to install something would scare some off, and Java isn't bundled with Mac OS X browsers anymore).
At the moment we're allowing only current versions of Safari, Chrome, Firefox, and maybe MSIE (feature detection for performance.now and the fullscreen API; I haven't checked how MSIE does yet).
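
For reference, that feature check could look roughly like this; the prefixed fullscreen method names are written from memory, so treat them as assumptions to verify:

// gate for admitting participants (sketch)
var el = document.documentElement;
var supported =
    !!(window.performance && typeof performance.now === 'function') &&
    !!(el.requestFullscreen || el.mozRequestFullScreen ||
       el.webkitRequestFullscreen || el.msRequestFullscreen);
if (!supported) {
    // show a "please use a current browser" notice instead of the study
}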

  • Since the browser will have to repaint regardless of how the image is hidden/shown, it's really just a "least terrible" scenario, I think. That is, any change to the browser window will incur a repaint (although possibly only to certain areas - like where the image is). Every browser will do this differently, in addition to every computer. I think your accepted variance on display time may need to be expanded to use an html/css/js solution. – Jordan Kasper Jan 18 '13 at 17:48
  • @jak Just getting a good estimate of the variance would be nice. Especially on the user side, but it would also help during pretesting. – Ruben Jan 18 '13 at 19:31
  • Every trial should have a well-defined tolerance level. It seems to me that this inquiry ignores the principle of significant figures (the idea that beyond a certain point precision becomes meaningless due to the number of factors). For example, how do you account for the variance in mouse drivers that may cause signal lag? I think you are putting more effort into this than you can reliably use. – patrickgamer Jan 20 '13 at 05:29
  • @patrick Do you mean [these principles](http://slc.umd.umich.edu/slconline/SIGF/page8.html) and what exactly do you mean? I can't account for everything and I'm happy with that. I want to do my best to account for what I can, though. If it was just about measurement biases, those would average out. But if the image is displayed for a too short time to some users, they simply won't get the treatment at all, I'd like to avoid that as much as possible. – Ruben Jan 20 '13 at 16:17
  • @patrick The idea with the tolerance level is nice. A [deleted answer linked to a blog post where they describe doing this](http://www.headlondon.com/our-thoughts/technology/posts/the-accuracy-of-javascript-timing) to solve a different problem, that doesn't exist with `performance.now` anymore (nonmonotonic time, system clock polling problems). My tolerance level would be about whether painting takes too long. But how do I find out whether my tolerance level was violated? This boils down to **1**, right? – Ruben Jan 20 '13 at 16:19
  • @Ruben My only point is that if you spend X hours to get a 00.12% improvement to tracking, but your maximum reliable accuracy is only 1%, you're basically wasting your time. I think pre-caching your images and having everything staged to load will probably give you everything you need with minimal loss of human interaction timing. – patrickgamer Jan 20 '13 at 20:32
  • You can figure out your tolerances by doing some research on human eye recognition time (like http://link.springer.com/article/10.1007%2FBF00355600?LI=true), then look at the paint times and see if they make any difference. I.e. if human response time is measured in tens of milliseconds and painting is measured in milliseconds, then the impact painting has on your test is inconsequential. – patrickgamer Jan 20 '13 at 20:36
  • @patrick I'd like to figure out the variance in presentation time on a couple of configurations (optimally on every one of the users' computers, but I doubt that'll work). I am trying to find out whether my estimates are off, how I can get better ones, and whether and how I can reduce the discrepancies between intended and actual presentation time. I wouldn't go through all that effort if I thought discrepancies on the order of 30 ms and possibly more, as presented above, were inconsequential. – Ruben Jan 20 '13 at 22:55
  • Did you try [benchmark.js](http://benchmarkjs.com/)? It uses some of the same techniques as jsperf and other "testing sites", often exposing a Java applet if available to get nanoseconds etc. – adeneo Jan 26 '13 at 18:41
  • @adeneo I've "seen it around", but didn't really see any advantage for this "paint time" problem in it. Can you elaborate or should I dig deeper into its code? – Ruben Jan 27 '13 at 14:00
  • You're one unlucky guy. I think you're trying to do the impossible. You either run into problems because JavaScript execution is separated from the actual rendering of the content, or because the methodology you use is asynchronous. I doubt you'll be able to get much further than what you've already done, but I think you have done a lot. Thank you for an intelligent question and answer. Good luck in your endeavour! – RandomSort Jan 28 '13 at 01:01

2 Answers


Because I haven't gotten any more answers yet, but learnt a lot while editing this question, I'm posting my progress so far as an answer. As you'll see, it's still not optimal, and I'll gladly award the bounty to anyone who improves on it.

Statistics

[Figure: new results (three panels)]

  • In the leftmost panel you can see the distribution that led me to doubt the time estimates I was getting.
  • The middle panel shows what I achieved after caching selectors, re-ordering some calls, using some more chaining, and minimising reflows by using visibility and absolute positioning instead of display.
  • The rightmost panel shows what I got after using an adapted function by Joe Lambert based on requestAnimationFrame. I did that after reading a blog post about rAF now having sub-millisecond precision too. I had thought it would only help me smooth animations, but apparently it helps with getting more accurate actual display durations as well.

[Figure: results]

In the final panel, the mean for the "paint-to-paint" timing is ~500 ms; the mean for the inline-execution timing scatters realistically (which makes sense, because I use the same timestamp to terminate the inner loop below) and correlates with the "paint-to-paint" timing.

There is still a good bit of variance in the durations and I'd love to reduce it further, but it's definitely progress. I'll have to test it on some slower computers and some Windows computers to see if I'm really happy with it; originally I'd hoped to get all deviations below 10 ms.

I could also collect way more data if I made a test suite that does not require user interaction, but I wanted to do it in our actual application to get realistic estimates.
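
Such a suite could simply chain trials off the timer itself. A rough sketch, reusing the Trial functions from the question and the requestTimeout wrapper defined below (the loop itself is hypothetical):

// run n trials back to back without user interaction
function autoRun(n) {
    if (n === 0) return;
    Trial.showImages();
    window.requestTimeout(function () {
        Trial.showProbe(); // records timestamps as usual
        window.requestTimeout(function () { autoRun(n - 1); }, 200); // arbitrary inter-trial pause
    }, 500);
}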

window.requestTimeout using window.requestAnimationFrame

// Fire fn after `delay` ms, checking on every animation frame
// instead of trusting a single setTimeout (adapted from Joe Lambert)
window.requestTimeout = function (fn, delay) {
    var start = performance.now(),
        handle = {};

    function loop() {
        var delta = performance.now() - start;

        if (delta >= delay) {
            fn.call();
        } else {
            handle.value = window.requestAnimationFrame(loop);
        }
    }

    handle.value = window.requestAnimationFrame(loop);
    return handle;
};
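
In the trial code this replaces the plain setTimeout call. Joe Lambert's gist also includes a matching cancel function; written from memory, it boils down to this:

// drop-in replacement for the setTimeout call in Trial.showImages
Session.waitForNextStep = window.requestTimeout(Trial.showProbe, 500);

// companion cancel (sketch; check against the original gist)
window.clearRequestTimeout = function (handle) {
    window.cancelAnimationFrame(handle.value);
};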

Edit:

An answer to another question of mine links to a good new article.


Did you try getting the initial milliseconds and, when the event fires, calculating the difference, instead of using setTimeout? Something like:

var startDate = new Date();
var startMilliseconds = startDate.getTime();

// when the event is fired ((...) stands for your event binding):
(...), function () {
    console.log(new Date().getTime() - startMilliseconds);
});

Try avoiding jQuery if possible; plain JS will give you better response times and better overall performance.
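
For comparison, the same measurement without jQuery might look roughly like this (element id borrowed from the question's snippets):

var el = document.getElementById('trial_images_' + Trial.current);
var shown = performance.now(); // or new Date().getTime(), as above
el.style.visibility = 'visible'; // skips jQuery's .show() overhead
// ...later, when hiding:
el.style.visibility = 'hidden';
console.log(performance.now() - shown); // elapsed display time in ms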

  • Yes, I did that too. It's in my code snippet and it's where the right half of the boxplot chart comes from. I just used `performance.now` instead of `new Date().getTime()` because it's more accurate (see my other question, linked above). I don't think my usage of jQuery will measurably impede performance, are you sure? I don't use any functions that mask extremely complex behaviour, as far as I can tell, and the JS execution is never > 0.x ms in my SpeedTracer output. The paint events seemed to be something where I could improve. – Ruben Jan 20 '13 at 16:09
  • It depends greatly on the selector context and the DOM tree size. Try taking a look at this site for further improvement: http://net.tutsplus.com/tutorials/javascript-ajax/10-ways-to-instantly-increase-your-jquery-performance/ Cheers – canolucas Jan 20 '13 at 16:16
  • Although I would make a small code snippet without jQuery and see if it makes any difference compared with the jQuery analog. I would really give it a try; maybe that's where the bottleneck is, when the event is being fired. – canolucas Jan 20 '13 at 16:24
  • Thanks for the link, I'll compare. But it definitely won't help me improve or measure paint times as given by SpeedTracer. – Ruben Jan 20 '13 at 20:04