My MySQL application is experiencing slow performance when running some `UPDATE`, `INSERT` and `DELETE` queries. In this question, I will only discuss one particular `UPDATE`, because it's enough to demonstrate the problem:

UPDATE projects SET ring = 5 WHERE id = 1

This `UPDATE` is usually fast enough, around 0.2 ms, but every now and then (often enough to be a problem) it takes several seconds. Here's an excerpt from the log (look at the 4th line):
~ (0.000282) UPDATE `projects` SET `ring` = 5 WHERE `id` = 1
~ (0.000214) UPDATE `projects` SET `ring` = 6 WHERE `id` = 1
~ (0.000238) UPDATE `projects` SET `ring` = 7 WHERE `id` = 1
~ (3.986502) UPDATE `projects` SET `ring` = 8 WHERE `id` = 1
~ (0.000186) UPDATE `projects` SET `ring` = 9 WHERE `id` = 1
~ (0.000217) UPDATE `projects` SET `ring` = 0 WHERE `id` = 1
~ (0.000162) UPDATE `projects` SET `ring` = 1 WHERE `id` = 1
`projects` is an InnoDB table with 6 columns of types `INT` and `VARCHAR`, 17 rows, and an index on `id` (a rough sketch of the schema is included below). The problem happens with other tables too, but here I'm focusing on this one. When trying to solve the problem, I ensured that the queries were all run sequentially, so this is not a lock issue. The `UPDATE` above is executed in the context of a transaction. Other information about the server:
- VPS with 4GB RAM (was 1GB), 12GB free disk space
- CentOS 5.8 (was 5.7)
- MySQL 5.5.10 (was 5.0.x)
The "was" values above mean that the problem was present both before and after those upgrades; upgrading didn't help.
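For reference, here is a rough sketch of what `projects` looks like. Only `id` and `ring` appear in the queries above, so the other column names and exact types are placeholders, and the primary key on `id` is an assumption on my part:

```sql
-- Approximate shape of the table; real column names/types may differ.
CREATE TABLE projects (
  id    INT NOT NULL,
  ring  INT,
  name  VARCHAR(255),   -- placeholder
  col4  VARCHAR(255),   -- placeholder
  col5  INT,            -- placeholder
  col6  VARCHAR(255),   -- placeholder
  PRIMARY KEY (id)      -- assumption; it may in fact be a secondary index on id
) ENGINE=InnoDB;
```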
What I've tried so far, to no avail:
- Setting `innodb_flush_log_at_trx_commit` to 0, 1 or 2
- Setting `innodb_locks_unsafe_for_binlog` on or off
- Setting `timed_mutexes` on or off
- Changing `innodb_flush_method` from the default to `O_DSYNC` or `O_DIRECT`
- Increasing `innodb_buffer_pool_size` from the default to 600M and then to 3000M
- Increasing `innodb_log_file_size` from the default to 128M
- Compiling MySQL from source
- Running `SHOW PROCESSLIST`, which informs me that the state is "updating"
- Running `SHOW PROFILE ALL` (see the sketch after this list), which says that almost all the time was spent on "updating" and that, within that step, not much time was spent on CPU cycles while there were many voluntary context switches (around 30)
- Monitoring `SHOW STATUS` for changes in `Innodb_buffer_pool_pages_dirty`. There may be some relation between dirty pages being flushed and the slow queries, but the correlation isn't clear.
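For completeness, this is roughly how I collected the profiling and status information above (a sketch; the query id passed to `SHOW PROFILE` is just an example):

```sql
-- Profile one occurrence of the UPDATE (profiling is per connection).
SET profiling = 1;
UPDATE projects SET ring = 8 WHERE id = 1;
SHOW PROFILES;                   -- find the query id of the slow run
SHOW PROFILE ALL FOR QUERY 1;    -- "1" is an example id

-- What was running while the statement was slow.
SHOW PROCESSLIST;

-- The dirty-page counter I was watching for a correlation with the stalls.
SHOW STATUS LIKE 'Innodb_buffer_pool_pages_dirty';

-- Current values of the InnoDB settings listed above.
SHOW VARIABLES LIKE 'innodb_%';
```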
Then I decided to check the system's I/O latency with `ioping`. This is my first VPS, so I was surprised to see this result:
4096 bytes from . (vzfs /dev/vzfs): request=1 time=249.2 ms
4096 bytes from . (vzfs /dev/vzfs): request=2 time=12.3 ms
4096 bytes from . (vzfs /dev/vzfs): request=3 time=110.5 ms
4096 bytes from . (vzfs /dev/vzfs): request=4 time=232.8 ms
4096 bytes from . (vzfs /dev/vzfs): request=5 time=294.4 ms
4096 bytes from . (vzfs /dev/vzfs): request=6 time=704.7 ms
4096 bytes from . (vzfs /dev/vzfs): request=7 time=1115.0 ms
4096 bytes from . (vzfs /dev/vzfs): request=8 time=209.7 ms
4096 bytes from . (vzfs /dev/vzfs): request=9 time=64.2 ms
4096 bytes from . (vzfs /dev/vzfs): request=10 time=396.2 ms
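For reference, the output above came from an invocation along these lines (the exact request count is from memory):

```sh
# 10 requests against the current directory, which lives on the vzfs filesystem
ioping -c 10 .
```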
Pretty erratic, I would say.
Having said all of that, I ask:
- Can the I/O latency be occasionally killing MySQL performance? I always thought that, when you ran an `UPDATE`, the thread taking care of that connection wasn't going to flush data to disk or wait for such a flush; it would return immediately, and the flushing would be done by another thread at another time.
- If it can't be disk I/O, is there anything else I can possibly try, short of renting a dedicated server?
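In case it matters for the answer, my assumption about flushing is tied to `innodb_flush_log_at_trx_commit` (already listed above): as far as I understand, with the default value of 1 InnoDB flushes the redo log to disk at every COMMIT, while 0 and 2 defer the flush to roughly once per second. A sketch of how I checked and changed it:

```sql
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- one of the values I tried, expecting it to decouple commits from disk flushes:
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```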