
What is the 'correct' query to fetch a cumulative sum in MySQL?

I have a table where I keep information about files; one column contains the size of each file in bytes (the actual files are kept on disk somewhere).

I would like to get the cumulative file size like this:

+------------+---------+--------+----------------+
| fileInfoId | groupId | size   | cumulativeSize |
+------------+---------+--------+----------------+
|          1 |       1 | 522120 |         522120 |
|          2 |       2 | 316042 |         316042 |
|          4 |       2 | 711084 |        1027126 |
|          5 |       2 | 697002 |        1724128 |
|          6 |       2 | 663425 |        2387553 |
|          7 |       2 | 739553 |        3127106 |
|          8 |       2 | 700938 |        3828044 |
|          9 |       2 | 695614 |        4523658 |
|         10 |       2 | 744204 |        5267862 |
|         11 |       2 | 609022 |        5876884 |
|        ... |     ... |    ... |            ... |
+------------+---------+--------+----------------+
20000 rows in set (19.2161 sec.)

Right now, I use the following query to get the above results

SELECT
  a.fileInfoId
, a.groupId
, a.size
, SUM(b.size) AS cumulativeSize
FROM fileInfo AS a
LEFT JOIN fileInfo AS b USING(groupId)
WHERE a.fileInfoId >= b.fileInfoId
GROUP BY a.fileInfoId
ORDER BY a.groupId, a.fileInfoId

My solution is, however, extremely slow (around 19 seconds without cache), presumably because the self-join pairs every row with every earlier row in the same group, so the amount of work grows quadratically with group size.

Explain gives the following execution details

+----+--------------+-------+-------+-------------------+-----------+---------+----------------+-------+-------------+
| id | select_type  | table | type  | possible_keys     | key       | key_len | ref            | rows  | Extra       |
+----+--------------+-------+-------+-------------------+-----------+---------+----------------+-------+-------------+
|  1 | SIMPLE       |     a | index | PRIMARY,foreignId | PRIMARY   |       4 | NULL           | 14905 |             |
|  1 | SIMPLE       |     b | ref   | PRIMARY,foreignId | foreignId |       4 | db.a.foreignId |    36 | Using where |
+----+--------------+-------+-------+-------------------+-----------+---------+----------------+-------+-------------+



My question is:

How can I optimize the above query?



Update
I've updated the question to provide the table structure and a procedure to fill the table with 20,000 rows of test data.

CREATE TABLE `fileInfo` (
  `fileInfoId` int(10) unsigned NOT NULL AUTO_INCREMENT
, `groupId` int(10) unsigned NOT NULL
, `name` varchar(128) NOT NULL
, `size` int(10) unsigned NOT NULL
, PRIMARY KEY (`fileInfoId`)
, KEY `groupId` (`groupId`)
) ENGINE=InnoDB;

delimiter $$
DROP PROCEDURE IF EXISTS autofill$$
CREATE PROCEDURE autofill()
BEGIN
    DECLARE i INT DEFAULT 0;
    DECLARE gid INT DEFAULT 0;
    DECLARE nam char(20);
    DECLARE siz INT DEFAULT 0;
    WHILE i < 20000 DO
        SET gid = FLOOR(RAND() * 250);
        SET nam = CONV(FLOOR(RAND() * 10000000000000), 20, 36);
        SET siz = FLOOR((RAND() * 1024 * 1024));
        INSERT INTO `fileInfo` (`groupId`, `name`, `size`) VALUES(gid, nam, siz);
        SET i = i + 1;
    END WHILE;
END;$$
delimiter ;

CALL autofill();

About the possible duplicate question
The question linked by Forgotten Semicolon is not the same question. My question has an extra groupId column, and because of that extra column the accepted answer there does not work for my problem. (Maybe it can be adapted to work, but I don't know how, hence my question.)

Jacco
  • possible duplicate of [Calculate a running total in MySQL](http://stackoverflow.com/questions/664700/calculate-a-running-total-in-mysql) – Forgotten Semicolon Jun 29 '10 at 21:28
  • I don't have time, but it's possible [WITH ROLLUP](http://dev.mysql.com/doc/refman/5.1/en/group-by-modifiers.html) will do what you want without needing the self join... – OMG Ponies Jun 29 '10 at 21:28
  • @OMG Ponies if you find the time to answer, I would be very interested in seeing the With Rollup solution. – Jacco Jun 29 '10 at 21:43
  • Can't promise it'll be faster, but if you can update the question with the scripts to create the table & populate it - that'd help. – OMG Ponies Jun 29 '10 at 21:49
  • `WITH ROLLUP` is blazingly fast. It will do a summary total, where it adds a grand total row after each group being summed, but I can't see how to get it to produce a running total. It would be a great solution if it can be done. – Mike Jun 30 '10 at 13:43
  • @Jacco: The inelegance of my previous solution was bugging me, so after a little more research, I've found something a whole lot better, and tweaked it to work with your requirements. – Mike Jul 01 '10 at 10:11
  • Duplicate of http://stackoverflow.com/questions/2563918/create-a-cumulative-sum-column-in-mysql. – Ztyx Jan 09 '15 at 13:09

2 Answers


You could use a variable - it's far quicker than any join:

SELECT
    id,
    size,
    @total := @total + size AS cumulativeSize
FROM table1, (SELECT @total:=0) AS t;
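
Note that this form relies on the rows being scanned in primary-key order, which InnoDB usually does for a full table scan but which SQL does not guarantee. If you want to pin the order down, a sketch of a variant that forces it by ordering inside a derived table (the same trick the grouped query further down uses):

SELECT id, size, cumulativeSize
FROM (
    SELECT
        id,
        size,
        @total := @total + size AS cumulativeSize
    FROM table1, (SELECT @total:=0) AS t
    ORDER BY id
) AS ordered;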

Here's a quick test case on a Pentium III with 128MB RAM running Debian 5.0:

Create the table:

DROP TABLE IF EXISTS `table1`;

CREATE TABLE `table1` (
    `id` int(11) NOT NULL auto_increment,
    `size` int(11) NOT NULL,
    PRIMARY KEY  (`id`)
) ENGINE=InnoDB;

Fill with 20,000 random numbers:

DELIMITER //
DROP PROCEDURE IF EXISTS autofill//
CREATE PROCEDURE autofill()
BEGIN
    DECLARE i INT DEFAULT 0;
    WHILE i < 20000 DO
        INSERT INTO table1 (size) VALUES (FLOOR((RAND() * 1000)));
        SET i = i + 1;
    END WHILE;
END;
//
DELIMITER ;

CALL autofill();

Check the row count:

SELECT COUNT(*) FROM table1;

+----------+
| COUNT(*) |
+----------+
|    20000 |
+----------+

Run the cumulative total query:

SELECT
    id,
    size,
    @total := @total + size AS cumulativeSize
FROM table1, (SELECT @total:=0) AS t;

+-------+------+----------------+
|    id | size | cumulativeSize |
+-------+------+----------------+
|     1 |  226 |            226 |
|     2 |  869 |           1095 |
|     3 |  668 |           1763 |
|     4 |  733 |           2496 |
...
| 19997 |  966 |       10004741 |
| 19998 |  522 |       10005263 |
| 19999 |  713 |       10005976 |
| 20000 |    0 |       10005976 |
+-------+------+----------------+
20000 rows in set (0.07 sec)

UPDATE

I'd missed the grouping by groupId in the original question, and that certainly made things a bit trickier. I then wrote a solution which used a temporary table, but I didn't like it: it was messy and overly complicated. I went away and did some more research, and have come up with something far simpler and faster.

I can't claim all the credit for this; in fact, I can barely claim any at all, as it is just a modified version of "Emulate row number" from Common MySQL Queries.

It's beautifully simple, elegant, and very quick:

SELECT fileInfoId, groupId, name, size, cumulativeSize
FROM (
    SELECT
        fileInfoId,
        groupId,
        name,
        size,
        @cs := IF(@prev_groupId = groupId, @cs+size, size) AS cumulativeSize,
        @prev_groupId := groupId AS prev_groupId
    FROM fileInfo, (SELECT @prev_groupId:=0, @cs:=0) AS vars
    ORDER BY groupId
) AS tmp;

You can remove the outer SELECT ... AS tmp if you don't mind the prev_groupId column being returned. I found that it ran marginally faster without it.
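
That stripped-down form is just the inner query on its own, with the helper prev_groupId column now appearing in the output:

SELECT
    fileInfoId,
    groupId,
    name,
    size,
    @cs := IF(@prev_groupId = groupId, @cs+size, size) AS cumulativeSize,
    @prev_groupId := groupId AS prev_groupId
FROM fileInfo, (SELECT @prev_groupId:=0, @cs:=0) AS vars
ORDER BY groupId;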

Here's a simple test case:

INSERT INTO `fileInfo` VALUES
( 1, 3, 'name0', '10'),
( 5, 3, 'name1', '10'),
( 7, 3, 'name2', '10'),
( 8, 1, 'name3', '10'),
( 9, 1, 'name4', '10'),
(10, 2, 'name5', '10'),
(12, 4, 'name6', '10'),
(20, 4, 'name7', '10'),
(21, 4, 'name8', '10'),
(25, 5, 'name9', '10');

SELECT fileInfoId, groupId, name, size, cumulativeSize
FROM (
    SELECT
        fileInfoId,
        groupId,
        name,
        size,
        @cs := IF(@prev_groupId = groupId, @cs+size, size) AS cumulativeSize,
        @prev_groupId := groupId AS prev_groupId
    FROM fileInfo, (SELECT @prev_groupId := 0, @cs := 0) AS vars
    ORDER BY groupId
) AS tmp;

+------------+---------+-------+------+----------------+
| fileInfoId | groupId | name  | size | cumulativeSize |
+------------+---------+-------+------+----------------+
|          8 |       1 | name3 |   10 |             10 |
|          9 |       1 | name4 |   10 |             20 |
|         10 |       2 | name5 |   10 |             10 |
|          1 |       3 | name0 |   10 |             10 |
|          5 |       3 | name1 |   10 |             20 |
|          7 |       3 | name2 |   10 |             30 |
|         12 |       4 | name6 |   10 |             10 |
|         20 |       4 | name7 |   10 |             20 |
|         21 |       4 | name8 |   10 |             30 |
|         25 |       5 | name9 |   10 |             10 |
+------------+---------+-------+------+----------------+

Here's a sample of the last few rows from a 20,000 row table:

|      19481 |     248 | 8CSLJX22RCO | 1037469 |       51270389 |
|      19486 |     248 | 1IYGJ1UVCQE |  937150 |       52207539 |
|      19817 |     248 | 3FBU3EUSE1G |  616614 |       52824153 |
|      19871 |     248 | 4N19QB7PYT  |  153031 |       52977184 |
|        132 |     249 | 3NP9UGMTRTD |  828073 |         828073 |
|        275 |     249 | 86RJM39K72K |  860323 |        1688396 |
|        802 |     249 | 16Z9XADLBFI |  623030 |        2311426 |
...
|      19661 |     249 | ADZXKQUI0O3 |  837213 |       39856277 |
|      19870 |     249 | 9AVRTI3QK6I |  331342 |       40187619 |
|      19972 |     249 | 1MTAEE3LLEM | 1027714 |       41215333 |
+------------+---------+-------------+---------+----------------+
20000 rows in set (0.31 sec)
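
For what it's worth, on MySQL 8.0 and later (which did not exist when this answer was written), the same per-group running total can be expressed with a window function instead of user variables; a minimal sketch:

SELECT
    fileInfoId,
    groupId,
    name,
    size,
    SUM(size) OVER (PARTITION BY groupId ORDER BY fileInfoId) AS cumulativeSize
FROM fileInfo
ORDER BY groupId, fileInfoId;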
Mike
  • +1: Saw this in the duplicate, I'd only used variables for ranking/windowing functionality... – OMG Ponies Jun 29 '10 at 21:40
  • looks promising, will try it out :) – Jacco Jun 29 '10 at 21:42
  • Sorry, I hadn't spotted the duplicate post - I rushed this answer off before heading to bed. I'm a bit of a fan of variables in SQL queries, and shoehorn them in wherever possible ;-) – Mike Jun 30 '10 at 07:44
  • Unfortunately, your answer calculates the cumulative size for all files, not grouped per foreignId (groupId). (also see updated question) – Jacco Jun 30 '10 at 08:50
  • @Jacco: Ah yes, that does kinda put a spanner in the works. Are you able to create a temporary table? If so, I *might* be able to come up with a fast solution. – Mike Jun 30 '10 at 11:34
  • @Mike: It's a dedicated box, so I can create temporary tables if needed. – Jacco Jun 30 '10 at 11:56
  • @Jacco: That's good news - it means my non-working solution isn't completely wasted. I've updated my answer to include an alternative approach using a temporary table. – Mike Jun 30 '10 at 13:08
  • Will a temp-table solution hold itself in a multi-user environment? – Jacco Jun 30 '10 at 16:05
  • @Jacco: There will obviously be a short time where the temp table could be out of sync with the live table, so you may need to take that into account. However, I'm not sure what concurrency issues would arise from a long running query on the live table - that is, if you used a slower query without a temp table. Either way, at some point, there is a chance that write attempts will be made to the live table while your query is running. You could place a lock on the table for the duration of the query. – Mike Jun 30 '10 at 16:22
  • I fiddled around with it a bit, and it works like a charm. Thanks a lot! – Jacco Jul 02 '10 at 12:05
  • @Jacco: Is it the new simpler version that you have used? It does what I was trying to do, prior to the temp table thing. I wanted it to remember the value from the previous row, but couldn't quite get it to work - it was so obvious when I saw it that I could have kicked myself! – Mike Jul 02 '10 at 12:12
  • I've used your 'modified version of Emulate row number', with typo correction. It's brilliantly simple (once somebody else has shown you how to :) – Jacco Jul 02 '10 at 18:31
  • @Jacco: Have I made a typo? That won't do! – Mike Jul 02 '10 at 20:18

I think that MySQL is only using one of the indexes on the table. In this case, it's choosing the index on foreignId.

Add a covering compound index that includes both primaryId and foreignId.
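
A sketch of what that suggestion could look like against the updated schema, where groupId plays the role of the foreign id (the index name here is made up; as the comment below argues, it may be redundant under InnoDB, whose secondary indexes already carry the primary key):

-- Hypothetical index covering the join column plus the PK comparison
ALTER TABLE fileInfo ADD INDEX idx_group_file (groupId, fileInfoId);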

Marcus Adams
  • That'd be redundant, since the primary key will already be a clustered index. – OMG Ponies Jun 29 '10 at 21:18
  • @OMG, I thought that the problem was that MySQL was only using one of the indexes on the table. If I'm wrong, please let me know. – Marcus Adams Jun 29 '10 at 21:21
  • You're right - MySQL only uses one index per table in a statement (one per line in the EXPLAIN). Depends on the performance, but given that the pk will be a clustered index already - I'd try a separate index for the fk first. – OMG Ponies Jun 29 '10 at 21:24