We have two tables:
create table A (
  id int,
  a int,
  b int,
  c varchar(255),
  unique key K1 (a, c),
  key K2 (b, c),
  key K3 (a, b, c)
);

create table B (
  id int,
  b int,
  c varchar(255),
  x varchar(255),
  unique key K (b, c)
);
We run queries like this:
SELECT B.x
FROM B
INNER JOIN A
ON A.b = B.b
AND A.c = B.c
WHERE A.a IN (...a values...);
The a values are supplied by the client language (Ruby, in our case), and there are roughly 10 to 100,000 of them per query.
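Concretely, the statement ends up carrying a large literal IN list, roughly like this (the values below are just illustrative):

SELECT B.x
FROM B
INNER JOIN A
ON A.b = B.b
AND A.c = B.c
WHERE A.a IN (3, 17, 42, 90, 105);  -- up to ~100,000 entries in practice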
The EXPLAIN output looks like this:
*************************** 1. row ***************************
         id: 1
select_type: SIMPLE
      table: A
       type: range
        key: K3
    key_len: 4
        ref: NULL
       rows: 100
      Extra: Using where; Using index
*************************** 2. row ***************************
         id: 1
select_type: SIMPLE
      table: B
       type: ref
        key: K
    key_len: 4
        ref: A.b
       rows: 213
      Extra: Using index condition
2 rows in set (0.00 sec)
This query works great while A and B are small, but it becomes slow once the tables grow past about 20 million rows. I suspect the composite key is not being used effectively for the join. How can I solve this?
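For example, would it help to load the a values into an indexed temporary table and join against it instead of using one huge IN list? A rough sketch of what I mean (tmp_a is just a placeholder name, and the inserted values are illustrative):

CREATE TEMPORARY TABLE tmp_a (a INT PRIMARY KEY);

-- bulk-insert the 10-100,000 client-supplied values
INSERT INTO tmp_a VALUES (3), (17), (42);

SELECT B.x
FROM tmp_a
INNER JOIN A ON A.a = tmp_a.a
INNER JOIN B ON B.b = A.b AND B.c = A.c;

Or is there an index or query shape that makes the original join scale?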