We have a web application backed by MySQL serving hundreds of queries per second. I'm looking for a way to measure the "cost" of every query in production. I'm imagining some option where, for every query, MySQL returns the query results along with the CPU and I/O cost of executing that query.
The end goal is to aggregate those costs by endpoint (e.g. "/search") and by the logged-in user ID. That way, when we're having issues with the site, we can quickly see if a particular action or user ID is using up a large chunk of our MySQL resources.
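To make the goal concrete, here is a rough sketch of the per-request bookkeeping I have in mind. This is just an illustration: `query_cost()` is a placeholder for exactly the thing I'm asking for, not an existing MySQL feature, and the cursor is assumed to be a standard DB-API style cursor.

```python
from collections import defaultdict

# Aggregated "cost" keyed by (endpoint, user_id).
cost_by_key = defaultdict(float)

def query_cost(cursor):
    # Placeholder: the per-query CPU / I/O cost I want MySQL to
    # report back. No such call exists as far as I know.
    return 0.0

def run_query(cursor, endpoint, user_id, sql, params=()):
    # Execute the application query (DB-API style cursor assumed).
    cursor.execute(sql, params)
    rows = cursor.fetchall()
    # Attribute the cost of this query to the current endpoint + user.
    cost_by_key[(endpoint, user_id)] += query_cost(cursor)
    return rows
```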
Close but not quite (AFAICT):
This answer comes close: https://stackoverflow.com/a/12880997/163832
It describes the precision and accuracy problems with EXPLAIN and recommends an alternative that measures what actually happened rather than estimating what will happen.
The alternative does seem better for my use case, but there are still problems:
- I looked at the available stats and can't find ones that measure CPU or I/O.
- I don't think I can afford to run `FLUSH STATUS` and then `SHOW SESSION STATUS ...` around every query (see the sketch after this list).
- This doesn't work when many queries are running concurrently.
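For reference, this is roughly the per-query pattern I understand the linked answer to suggest, and why it worries me: two extra round trips for every application query, and the counters that come back are `Handler_*` row-operation counts rather than CPU or I/O. A minimal sketch, again assuming a DB-API style cursor:

```python
def measure_query(cursor, sql, params=()):
    # Reset this session's status counters (extra round trip #1).
    cursor.execute("FLUSH STATUS")

    # Run the query we actually care about.
    cursor.execute(sql, params)
    rows = cursor.fetchall()

    # Read the counters back (extra round trip #2). These are
    # Handler_* row-operation counts, not CPU time or bytes of I/O,
    # and they are only meaningful if nothing else shares the session.
    cursor.execute("SHOW SESSION STATUS LIKE 'Handler%'")
    counters = {name: int(value) for name, value in cursor.fetchall()}

    return rows, counters
```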