
I am trying to figure out how best to track and trend end-to-end performance between releases. By end-to-end I mean: what is the experience for a client visiting this app via a browser? This includes download time, DOM rendering, JavaScript execution, etc.

Currently I am running load tests using JMeter, which is great for proving application and database capacity. Unfortunately, JMeter will never let me show the full picture of the user experience. JMeter is not a browser, so it will never simulate the impact of JavaScript and DOM rendering. For example: if time to first byte is 100 ms but the browser then takes 10 seconds to download assets and render the DOM, we have problems.

I need a tool to help me with this. My initial idea is to leverage Selenium. It could run a set of tests (login, view this, create that) and somehow record timings for each. We would need to run the same scenario multiple times, likely across a set of browsers. This would be done before every release and would allow me to identify changes in the experience for the user.
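
Roughly, I imagine something like this per action (a rough sketch only; the URL, element ids and credentials are made up, and I am assuming Selenium 4's Java bindings):

    import org.openqa.selenium.By;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.support.ui.WebDriverWait;

    import java.time.Duration;

    public class LoginTimingProbe {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                long start = System.nanoTime();

                // Placeholder URL and selectors -- would be replaced with the real app.
                driver.get("https://example.com/login");
                driver.findElement(By.id("username")).sendKeys("demo");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("submit")).click();

                // Wait until the browser reports the page is fully loaded, so the
                // measurement includes asset download and DOM rendering, not just TTFB.
                new WebDriverWait(driver, Duration.ofSeconds(30)).until(
                        d -> "complete".equals(
                                ((JavascriptExecutor) d).executeScript("return document.readyState")));

                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("login took " + elapsedMs + " ms");
            } finally {
                driver.quit();
            }
        }
    }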

For example, this is what I would like to generate:

action      |  v1.5  |  v1.6  |  v1.7
------------+--------+--------+-------
login       |  2.3s  |  3.1s  |  1.2s
create user |  2.9s  |  2.7s  |  1.5s

The problems with Selenium are that (1) I am not sure it is designed for this, and (2) it appears that DOM ready and JavaScript rendering are really hard to detect.

Is this the right path? Does anyone have any pointers? Are there tools out there that I could leverage for this?

Roeland

2 Answers


I think you have good goals, but I would split them:

  • Measuring DOM rendering, JavaScript rendering, etc. is not really part of "the experience of the client visiting this app via a browser", because your clients are usually unaware that you are "rendering the DOM" or "running JavaScript" - and they don't care. But those metrics are something I'd want to check after every committed change, not just release to release, because it can be hard to trace a degradation back to a particular change if such a test is not running all the time. So I would put it in continuous integration at the build level. See a good discussion here.

  • Then you probably want to know whether server-side performance is the same, worse, or better. For that, JMeter is ideal. Such testing can be done on a schedule (e.g. nightly or on each release) and can be automated using, for example, the JMeter plug-in for Jenkins. If server-side performance got worse, you don't really need end-to-end testing, since you already know what will happen.

  • But if the server is doing well, then an "end user experience" test using a real browser has real value, so Selenium actually fits well here. Since it can be integrated with any of the testing frameworks (JUnit, NUnit, etc.), it also fits into an automated process and can generate a report, including durations (JUnit, for instance, has a TestWatcher which lets you add consistent duration measurement to every test; see the sketch after this list).

  • After all this automation, I would also do a "real end user experience" test while a JMeter performance test is running against the same server: get a real person to use the app while it's under load. People, unlike automation, are unpredictable, which is good for finding bugs.
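
For the TestWatcher idea mentioned above, here is a minimal sketch (JUnit 4; the test body is just a placeholder for a real Selenium scenario):

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;

    public class LoginFlowTest {

        // Records how long each test method took; the numbers can be written to a
        // CSV or report and compared release to release.
        @Rule
        public TestWatcher durationWatcher = new TestWatcher() {
            private long startNanos;

            @Override
            protected void starting(Description description) {
                startNanos = System.nanoTime();
            }

            @Override
            protected void finished(Description description) {
                long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000;
                System.out.printf("%s took %d ms%n", description.getMethodName(), elapsedMs);
            }
        };

        @Test
        public void login() {
            // Placeholder: drive the real login scenario with Selenium here.
        }
    }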

timbre timbre
  1. Regarding "JMeter is not a browser": it really isn't a browser, but it can act like one given the proper configuration, so make sure you:

    • add HTTP Cookie Manager to your Test Plan to represent browser cookies and deal with cookie-based authentication
    • add HTTP Header Manager to send the appropriate headers
    • configure HTTP Request samplers via HTTP Request Defaults to

      • Retrieve all embedded resources
      • Use a thread pool of around 5 concurrent threads to do it
    • Add HTTP Cache Manager to represent browser cache (i.e. embedded resources retrieved only once per virtual user per iteration)
    • if your application is built on AJAX, you need to mimic the AJAX requests with JMeter as well

    Regarding "rendering", for example you detect that your application renders slowly on a certain browser and there is nothing you can do by tuning the application. What's next? You will be developing a patch or raising an issue to browser developers? I would recommend focus on areas you can control, and rendering DOM by a browser is not something you can.

  2. If you still need these client-side metrics for any reason, you can consider using the WebDriver Sampler alongside the main JMeter load test so that real-browser metrics are also added to the final report. You can even use the Navigation Timing API to collect exact timings and add them to the load test report (a sketch follows this list).

    [screenshot: WebDriver Timings]

    See Using Selenium with JMeter's WebDriver Sampler to get started.

  3. There are multiple options for tracking your application's performance between builds (and JMeter test executions), e.g.
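
To illustrate point 2, a minimal sketch of pulling Navigation Timing figures out of the browser with plain Selenium in Java (inside JMeter's WebDriver Sampler, the same window.performance calls would be issued from the sampler script; the URL below is a placeholder):

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class NavigationTimingProbe {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com");   // placeholder URL

                JavascriptExecutor js = (JavascriptExecutor) driver;

                // Navigation Timing (Level 1) milestones, in milliseconds since the epoch.
                long navigationStart = (Long) js.executeScript(
                        "return window.performance.timing.navigationStart;");
                long domComplete = (Long) js.executeScript(
                        "return window.performance.timing.domComplete;");
                long loadEventEnd = (Long) js.executeScript(
                        "return window.performance.timing.loadEventEnd;");

                System.out.println("DOM complete after " + (domComplete - navigationStart) + " ms");
                System.out.println("Page load after    " + (loadEventEnd - navigationStart) + " ms");
            } finally {
                driver.quit();
            }
        }
    }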

Dmitri T