I am seeing unexplained performance issues with MySQL queries.
The data is a MySQL InnoDB table with 3.85 million rows of item-to-item correlation data: for a given item "item_i", "count_i" people who ordered it also ordered item "also_i".
CREATE TABLE `hl_also2sm` (
`item_i` int(10) unsigned NOT NULL DEFAULT '0',
`also_i` int(10) unsigned NOT NULL DEFAULT '0',
`count_i` int(10) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`item_i`,`also_i`),
KEY `count_i` (`count_i`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
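For concreteness, a row like the following (values made up purely for illustration) would mean that 42 of the people who ordered item 1234 also ordered item 5678:
INSERT INTO hl_also2sm (item_i, also_i, count_i)
VALUES (1234, 5678, 42); -- hypothetical row: 42 buyers of item 1234 also ordered item 5678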
A sample correlation is done by taking a random list of items, finding the top correlated items, and returning the approximate time the MySQL query took to run.
// JavaScript in Node.js with MySQL, on Debian Linux
var util = require('util');
var sql = require('./routes/sqlpool'); // connects to the DB
var cmd = util.promisify(sql.cmd); // promisified raw MySQL command function
async function inquiry(NumberOfItems){
// generate random list of items to perform correlation against
var rtn = await cmd(`select DISTINCT item_i from hl_also2sm order by RAND() limit ${NumberOfItems}`);
var items_l = rtn.map((h)=>{return h.item_i});
var ts = Date.now();
// get top 50 correlated items
var c = `select also_i,COUNT(*) as cnt,SUM(count_i) as sum from hl_also2sm
where item_i IN (${items_l.join(",")})
AND also_i NOT IN (${items_l.join(",")})
group by also_i
order by cnt DESC,sum DESC limit 50`;
await cmd(c);
var MilliSeconds = Date.now()-ts;
return MilliSeconds;
};
To test this over a range of list sizes:
async function inquiries(){
  for (var items=200;items<3000;items+=200) {
    var Data = [];
    for (var i=0;i<10;i++) {
      Data.push(await inquiry(items));
    }
    Data.sort((a,b)=>a-b); // numeric sort, so Data[0]/Data[9] are the true min/max
    console.log(`${items} items - min:${Data[0]} max:${Data[9]}`);
  }
}
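The sweep is kicked off with a plain call along these lines; how the connection pool is eventually closed depends on ./routes/sqlpool, which isn't shown here.
// run the sweep; any query error is surfaced here rather than swallowed by the async function
inquiries().catch((err) => console.error(err));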
The results were:
200 items - min:315 max:331
400 items - min:1214 max:1235
600 items - min:2669 max:2718
800 items - min:4796 max:4823
1000 items - min:6872 max:7006
1200 items - min:134 max:154
1400 items - min:147 max:169
1600 items - min:162 max:198
1800 items - min:190 max:212
2000 items - min:210 max:244
2200 items - min:237 max:258
2400 items - min:248 max:293
2600 items - min:263 max:302
2800 items - min:292 max:322
This is very puzzling.
Why is the 2000-item query over 25X faster than the 1000-item query??
The EXPLAIN for the 1000-item select is:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | hl_also2sm | index | PRIMARY | count_i | 4 | NULL | 4043135 | Using where; Using index; Using temporary; Using filesort |
The EXPLAIN for the 2000-item select is:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | hl_also2sm | range | PRIMARY | PRIMARY | 4 | NULL | 758326 | Using where; Using temporary; Using filesort |
I ran this many times, each run producing similar results.
Yes, many of my users have shown interest in thousands of items through pageviews, comments, picture views, or orders, and I would like to produce a good "you might also like" list for them.
Summary of problem
select also_i,
COUNT(*) as cnt,
SUM(count_i) as sum
from hl_also2sm
where item_i IN (...) -- Varying the number of IN items
AND also_i NOT IN (...) -- Varying the number of IN items
group by also_i
order by cnt DESC, sum DESC
limit 50
For <= 1K items in the IN lists, the query does a full scan of KEY(count_i) and runs slower.
For > 1K items in the IN lists, the query does a range scan on the PRIMARY key and runs faster.
Why??
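For reference, a rough way to compare the two plans at the same list size is to pin the index choice with MySQL's FORCE INDEX hint. This is only a sketch; whether the forced plans exactly match the ones in the EXPLAINs above would need checking, and the "..." lists are the same generated IN lists as above.
-- force the plan the optimizer picks for <= 1K items (full scan of the count_i index)
select also_i, COUNT(*) as cnt, SUM(count_i) as sum
from hl_also2sm FORCE INDEX (count_i)
where item_i IN (...) AND also_i NOT IN (...)
group by also_i
order by cnt DESC, sum DESC limit 50;

-- force the plan it picks for > 1K items (range scan on the PRIMARY key)
select also_i, COUNT(*) as cnt, SUM(count_i) as sum
from hl_also2sm FORCE INDEX (PRIMARY)
where item_i IN (...) AND also_i NOT IN (...)
group by also_i
order by cnt DESC, sum DESC limit 50;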