
I just started using a PHP profiler to find the files in a PHP script that cause slow MySQL queries. Some people suggested using Xdebug to track this down by comparing the timestamp of a slow-log entry with the profiler output for the files that were executing at that same time.

I have read the Xdebug documentation but can't find an explanation for this.

Can anyone shed some light on this?

I'm using PHP 7.0 on Debian 9.

My slow-query-log-file entries:

# Thread_id: 222244  Schema: user  QC_hit: No
# Query_time: 51.019708  Lock_time: 0.000119  Rows_sent: 1  Rows_examined: 13295012
# Rows_affected: 0
SET timestamp=1559388099;
SELECT (COUNT(*)) AS `count` 
FROM statistics Statistics WHERE (id >= 1 AND ad_type <> 3);

Edit:

It's not about counting rows in a SELECT statement; it's about how to track down the application files that cause the slow requests.
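For context, the approach people suggested seems to be: turn on Xdebug's profiler so that every request writes a cachegrind file, and put the request timestamp into the filename so it can be lined up with the SET timestamp=... value in the slow log. A minimal sketch of that configuration, assuming Xdebug 2 on PHP 7.0 (the output directory here is just an example):

; php.ini (or conf.d/20-xdebug.ini)
xdebug.profiler_enable = 1
; example directory, must be writable by the web server
xdebug.profiler_output_dir = /tmp/xdebug
; %t = Unix timestamp of the request, %p = process id
xdebug.profiler_output_name = cachegrind.out.%t.%p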

Drunken M
  • Possible duplicate of [MySQL: Fastest way to count number of rows](https://stackoverflow.com/questions/5060366/mysql-fastest-way-to-count-number-of-rows) –  Jun 02 '19 at 05:44
  • There's a lot of information that suggests `SQL_CALC_FOUND_ROWS` is just as inefficient as `COUNT()` – zanderwar Jun 02 '19 at 05:47
  • No form of profiling will help. – Rick James Jun 02 '19 at 21:11

1 Answer


The most efficient way to count rows in a large table is to store the count in a separate table and increase/decrease that value as matching rows are added or removed. That way you're only reading a single cell, and a 51-second query becomes less than a second.

I know it feels like a redundant thing to do, but it is the most efficient and optimal way.

There are topics around that suggest querying INFORMATION_SCHEMA, but that doesn't help at all given your need for a WHERE clause, and everything else is just as inefficient as the query you have here.

All you need is the current count, a place to store it, and functionality to increase/decrease it, and you're good to go.
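As a rough sketch of what that could look like (the counter table and counter name below are made up for illustration; the seed query is the COUNT from the question):

-- one row per counter the application cares about
CREATE TABLE statistics_counters (
  name  VARCHAR(64) NOT NULL PRIMARY KEY,
  value BIGINT UNSIGNED NOT NULL DEFAULT 0
);

-- seed it once with the expensive COUNT (run a single time)
INSERT INTO statistics_counters (name, value)
SELECT 'active_ads', COUNT(*) FROM statistics WHERE id >= 1 AND ad_type <> 3;

-- whenever the application inserts a row matching the condition:
UPDATE statistics_counters SET value = value + 1 WHERE name = 'active_ads';

-- ...and when such a row is deleted (or updated out of the condition):
UPDATE statistics_counters SET value = value - 1 WHERE name = 'active_ads';

-- reading the count is now a single-row primary-key lookup
SELECT value FROM statistics_counters WHERE name = 'active_ads';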

zanderwar