
I have a largish but narrow InnoDB table with ~9m records. Doing count(*) or count(id) on the table is extremely slow (6+ seconds):

DROP TABLE IF EXISTS `perf2`;

CREATE TABLE `perf2` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `channel_id` int(11) DEFAULT NULL,
  `timestamp` bigint(20) NOT NULL,
  `value` double NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ts_uniq` (`channel_id`,`timestamp`),
  KEY `IDX_CHANNEL_ID` (`channel_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

RESET QUERY CACHE;
SELECT COUNT(*) FROM perf2;

While the statement is not run too often, it would be nice to optimize it. According to http://www.cloudspace.com/blog/2009/08/06/fast-mysql-innodb-count-really-fast/ this should be possible by forcing InnoDB to use an index:

SELECT COUNT(id) FROM perf2 USE INDEX (PRIMARY);

The explain plan seems fine:

id  select_type table   type    possible_keys   key     key_len ref     rows    Extra
1   SIMPLE      perf2   index   NULL            PRIMARY 4       NULL    8906459 Using index

Unfortunately the statement is as slow as before. Following the suggestions in "SELECT COUNT(*)" is slow, even with where clause, I've also tried optimizing the table, without success.

Is there a way to optimize COUNT(*) performance on InnoDB?

andig
3 Answers


As of MySQL 5.1.6 you can use the Event Scheduler and insert the count into a stats table regularly.

First create a table to hold the count:

CREATE TABLE stats (
`key` varchar(50) NOT NULL PRIMARY KEY,
`value` varchar(100) NOT NULL);

Then create an event to update the table:

CREATE EVENT update_stats
ON SCHEDULE
  EVERY 5 MINUTE
DO
  INSERT INTO stats (`key`, `value`)
  VALUES ('data_count', (select count(id) from data))
  ON DUPLICATE KEY UPDATE value=VALUES(value);

It's not perfect, but it offers a self-contained solution (no cronjob or queue) that can easily be tailored to run as often as the required freshness of the count demands.
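Note that the Event Scheduler is disabled by default, so the event above won't fire until it is turned on. A minimal sketch, assuming you have the required privileges and reusing the `data_count` key from above:

```sql
-- Enable the scheduler (off by default; persists only until restart
-- unless also set in the server configuration)
SET GLOBAL event_scheduler = ON;

-- Verify the event was registered
SHOW EVENTS;

-- Read the cached count instead of running COUNT(*)
SELECT `value` FROM stats WHERE `key` = 'data_count';
```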

Che

For the time being I've solved the problem by using this approximation:

EXPLAIN SELECT COUNT(id) FROM data USE INDEX (PRIMARY)

The approximate number of rows can be read from the rows column of the explain plan when using InnoDB, as shown above. When using MyISAM this column will remain empty because the table reference is optimized away entirely, so if it is empty, fall back to a traditional SELECT COUNT instead.
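The same statistics can also be read directly from the data dictionary, without parsing EXPLAIN output. This is just as approximate as the EXPLAIN-based trick (table name `perf2` assumed here):

```sql
-- TABLE_ROWS is an estimate for InnoDB, exact for MyISAM
SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'perf2';
```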

andig
    Remember that the "approximation" here is far from accurate, and for a table with 1M rows might return anywhere between 100K and 10M. `SHOW TABLE STATUS` is similar. It's really unreliable. I think a lot of the performance problems are fixed in MySQL 5.7, though. – tadman Oct 09 '13 at 18:49
    This approximation is so incredibly unreliable that I can't think of a situation where I'd use it. For a large table, I would rather execute the full `COUNT(*)` periodically and use that value until the next count. – Max May 02 '14 at 11:15
    The `Auto_increment` field within `SHOW TABLE STATUS` is much more accurate for a count of the whole table, with much greater speed. – Sam Doidge Oct 25 '16 at 11:47
    @SamDoidge be careful if there have been records deleted from the table then Auto_increment will be an over-estimation. – Rockstar04 Aug 22 '17 at 18:57

Based on @Che's code, you can also use triggers on INSERT and on DELETE on perf2 in order to keep the value in the stats table up to date in real time.

CREATE TABLE stats (
 `key`   varchar(50)  NOT NULL PRIMARY KEY,
 `value` varchar(100) NOT NULL
);

Then:

CREATE TRIGGER `count_up` AFTER INSERT   ON `perf2` FOR EACH ROW UPDATE `stats`
SET   `stats`.`value` = `stats`.`value` + 1 
WHERE `stats`.`key` = 'perf2_count';

CREATE TRIGGER `count_down` AFTER DELETE ON `perf2` FOR EACH ROW UPDATE `stats`
SET   `stats`.`value` = `stats`.`value` - 1 
WHERE `stats`.`key` = 'perf2_count';
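Since the triggers only increment or decrement an existing row, the counter has to be seeded once with the current count before it can be trusted. A one-off sketch (ideally run while writes to perf2 are paused, so the seeded value doesn't drift):

```sql
-- Seed (or re-sync) the counter with the actual row count
INSERT INTO stats (`key`, `value`)
SELECT 'perf2_count', COUNT(*) FROM perf2
ON DUPLICATE KEY UPDATE `value` = VALUES(`value`);
```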

So the number of rows in the perf2 table can be read using this query, in realtime:

SELECT `value` FROM `stats` WHERE `key` = 'perf2_count';

This has the advantage of eliminating the cost of running COUNT(*) on demand; the bookkeeping work is performed only when data in perf2 changes.

Valerio Bozz
MQuirion
  • This answer is pretty interesting, efficient and real-time. If you think this answer is confusing, it's just because it's the only one on this page using backticks the right way. – Valerio Bozz Feb 04 '22 at 11:40
  • @ValerioBozz this is actually not efficient, as triggers cause insert/deletes to take much longer compared to no triggers. Especially for large table inserts. – John C Dec 21 '22 at 20:54
  • @JohnC If you have a large set of table inserts, I think a START TRANSACTION; ...; COMMIT can help. It's easy to say that running a SELECT COUNT(*) on 9 million records is surely not efficient compared to this solution. Feel free to post another even-more-efficient solution. – Valerio Bozz Dec 22 '22 at 13:46
  • @ValerioBozz Starting a transaction will not change anything with the trigger; triggers still execute as normal within a transaction. We do select count on tables with 156 million records all the time. If the correct index is created, it takes milliseconds. This is the worst way to keep track of table count. Try to avoid triggers at all costs; they take up too many resources. – John C Mar 16 '23 at 07:10
  • Hi @JohnC - I'm trying to understand how this answer can be improved, but I don't get how an index can help in doing a COUNT(*) to obtain the number "156000000" in a table of 156 million records, efficiently (so, without a full table scan). Please expand your comment. Thank you for this clarification. Without this info, a trigger is still a decent and scalable idea. Better than an FTS. – Valerio Bozz Mar 16 '23 at 12:34