Hits are not always the best measurement of throughput. Here is why: the number of hits can be drastically altered by how the application under test manages cached items. An example: say you have a very complex page with 100 objects. Only the top-level page is dynamic; the rest are page components such as images, style sheets, fonts, JavaScript files, etc. In a model where no cache settings are in place, all 100 elements have to be requested on every page view, each generating a "hit" in both your test reporting and the server stats. Every one of those shows up in your hits/second figure.
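As a rough illustration of that multiplier (the page composition and request rate here are assumed, not measured), the no-cache model turns every page view into a burst of hits:

```python
# Hypothetical no-cache model: every page view re-fetches every component.
objects_per_page = 100   # 1 dynamic HTML page + 99 static components (assumed mix)
page_views_per_sec = 10  # assumed load level for the test

hits_per_sec = page_views_per_sec * objects_per_page
print(f"Server hits/sec with no caching: {hits_per_sec}")  # 1000
```

Ten page views a second reads as a thousand hits a second, even though the server is only doing meaningful work for ten of them.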
Now optimize the cache settings: some items are cached at the client for a very long period (logo images and fonts for a year), some for the one-week build interval (resident in the CDN or at the client), and only the dynamic top-level HTML remains uncached. In this model the server sees only one hit per page view, except for the period immediately after a build is deployed, while the CDN is being seeded for the majority of users. Occasionally a new CDN node comes into play for users in a different geographic area, but after the first user seeds its cache, the rest pull from the CDN and then cache at the client. In this case your effective hits per second drop tremendously at both the CDN and the origin servers, especially with returning users.
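A minimal sketch of that tiered policy, expressed as Cache-Control response headers (the file names and max-age choices are hypothetical, picked to match the tiers above):

```python
# Hypothetical tiered cache policy, expressed as Cache-Control header values.
YEAR = 365 * 24 * 3600
WEEK = 7 * 24 * 3600

cache_policy = {
    "logo.png":    f"public, max-age={YEAR}",   # long-lived brand assets
    "brand.woff2": f"public, max-age={YEAR}",
    "app.css":     f"public, max-age={WEEK}",   # rebuilt weekly, served via CDN
    "app.js":      f"public, max-age={WEEK}",
    "index.html":  "no-store",                  # dynamic page, never cached
}

# Steady state for a returning user: only the uncacheable HTML hits the origin.
origin_hits_per_view = sum(1 for v in cache_policy.values() if v == "no-store")
print(f"Origin hits per page view, warm cache: {origin_hits_per_view}")  # 1
```

Same page, same user behavior, but the hits/second number collapses from 100 per view to 1, which is exactly why the raw hit count tells you more about your cache headers than about your throughput.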
Food for thought.