
I am getting an entry in my slow query log (see below) for a query that is issued from a Node.js application. However, if I enter the same query at the MySQL command line it is not slow at all and uses indexes.

Any ideas why this occurs, and how I can make sure that my Node.js application does not generate slow queries?

The application is on another server, but in the same location, so the query goes over the network rather than running directly on the database server. However, I don't think that should affect the query speed, unless the slow log also takes transfer time into account. Does anyone know if this is the case?

Slow Log

# Time: 190726  9:49:02
# User@Host: root[root] @ elaine_master [192.168.202.146]
# Thread_id: 18  Schema: cfc  QC_hit: No
# Query_time: 1.552237  Lock_time: 0.000030  Rows_sent: 874  Rows_examined: 1694
SET timestamp=1564134542;
select * from applications where  status !='deleted'
                              and shotgun_status = ''
                              and  assignedto > 0
                              and lo_assistant = 0;

Command line

-------------------------+
874 rows in set (0.05 sec)

Explain

MariaDB [cfc]> explain select * from applications where  status !='deleted' and shotgun_status = '' and  assignedto > 0 and lo_assistant = 0;
+------+-------------+--------------+------+------------------------------------------------------------------------------------------------------------------+-------------------------------+---------+-------+------+-------------+
| id   | select_type | table        | type | possible_keys                                                                                                    | key                           | key_len | ref   | rows | Extra       |
+------+-------------+--------------+------+------------------------------------------------------------------------------------------------------------------+-------------------------------+---------+-------+------+-------------+
|    1 | SIMPLE      | applications | ref  | applications_idx_status,applications_idx_shotgunstatus,applications_idx_assignedto,applications_idx_lo_assistant | applications_idx_lo_assistant | 4       | const | 1693 | Using where |
+------+-------------+--------------+------+------------------------------------------------------------------------------------------------------------------+-------------------------------+---------+-------+------+-------------+
1 row in set (0.00 sec)

Indexes

  PRIMARY KEY (`id`),
  KEY `applications_idx_status` (`status`),
  KEY `applications_idx_leaders` (`leaders`),
  KEY `applications_idx_shotgunstatus` (`shotgun_status`),
  KEY `applications_idx_assignedto` (`assignedto`),
  KEY `applications_idx_lo_assistant` (`lo_assistant`)
) ENGINE=InnoDB AUTO_INCREMENT=2038 DEFAULT CHARSET=latin1
  • I do not think the slow log takes into account the network speed - the documentation explicitly says `execution time`. MySQL (unlike PostgreSQL) can not combine multiple indexes for the same table in the same query. Perhaps you can speed up the query if you create an index over (status, shotgun_status, assignedto, lo_assistant)? – IVO GELOV Jul 26 '19 at 10:46
  • Execution time includes disk i/o, which can be a bottleneck. Maybe the data used wasn't in the cache the first time, but is now (making the second execution faster), and/or you had a lot of other (also usually i/o) activity demanding more resources than available because of other queries. In the latter case, you typically might see other slow log entries around that time that also perform well when run isolated, so maybe check for that. – Solarflare Jul 26 '19 at 11:13
  • OK, I made an index with all the columns from the WHERE clause and the optimizer seems not to choose it when I look at an EXPLAIN. The query is consistently around 2 - 2.2 secs all day in the slow log when called from the Node.js app, so I don't think the issue is that the data isn't cached (or, I mean, can't be cached). Thanks so far for the suggestions, but I don't think we have nailed it yet. – Jonathan Carter Jul 26 '19 at 14:49
  • You have to check/exclude several cases: a) Your app might run this query at the same time as it runs other queries, so an isolated test of just that one query is not comparable. b) While it occurs often during the day, it does not occur *every time* (e.g. only when > 200 users are using your website); an isolated test at the "wrong time" is not comparable. c) The query actually *is* slow, but the test reads from the query cache; then the test is misleading. Please run your test with the [SQL_NO_CACHE](https://stackoverflow.com/q/181894) modifier. – Solarflare Jul 27 '19 at 11:27
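As a side note on point c), one way to repeat the command-line test while bypassing the query cache is the standard SQL_NO_CACHE modifier applied to the original query:

-- Skip the query cache so the test measures real execution time
SELECT SQL_NO_CACHE *
  FROM applications
 WHERE status != 'deleted'
   AND shotgun_status = ''
   AND assignedto > 0
   AND lo_assistant = 0;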

1 Answer


Please learn about "composite indexes". That query would benefit from a 3-column index starting with shotgun_status and lo_assistant (in either order), then either status or assignedto. Note that I suggest putting the = columns first, then one other column; once the optimizer gets to a 'range' condition, it can't use any further columns of the index. More discussion: http://mysql.rjweb.org/doc.php/index_cookbook_mysql
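A sketch of one of the index orderings described above (the index names are illustrative, not taken from the schema):

-- Equality-tested columns first (shotgun_status, lo_assistant), then one more column.
ALTER TABLE applications
  ADD INDEX idx_shotgun_lo_status (shotgun_status, lo_assistant, status);

-- Alternative: use assignedto as the third column instead of status.
-- ALTER TABLE applications
--   ADD INDEX idx_shotgun_lo_assignedto (shotgun_status, lo_assistant, assignedto);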

Unfortunately, such an index probably won't speed up the query by more than a factor of two. This is because you need 874 rows and they are probably scattered around on disk:

Rows_sent: 874  Rows_examined: 1694

Also, do you really need all columns? If not, change from SELECT * to listing only the columns you actually use. This will cut back on network traffic, thereby helping speed.
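A sketch of what that might look like, assuming (purely for illustration) that the application only needs a few of the columns:

-- Hypothetical subset of columns the app actually uses
SELECT id, status, assignedto
  FROM applications
 WHERE status != 'deleted'
   AND shotgun_status = ''
   AND assignedto > 0
   AND lo_assistant = 0;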

874 is a lot of rows to munch on in a UI. What are you doing with that many?

Rick James