19

We have a series of tables that have grown organically to several million rows; in production, an insert or update can take up to two seconds. However, if I dump the table and recreate it from the dump, queries are lightning fast.

We have rebuilt one of the tables by creating a copy, rebuilding the indexes, doing a rename switch, and then copying over any new rows; this worked because that table is only ever appended to. Doing this made the inserts and updates lightning fast.
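
A minimal sketch of that rebuild-and-swap, with hypothetical table names and an assumed auto-increment id column (@last_copied_id is a placeholder for wherever the copy stopped):

CREATE TABLE foo_new LIKE foo;
INSERT INTO foo_new SELECT * FROM foo;          -- rebuilds data and indexes
RENAME TABLE foo TO foo_old, foo_new TO foo;    -- atomic swap
-- catch up on rows that arrived during the rebuild (append-only table):
INSERT INTO foo SELECT * FROM foo_old WHERE id > @last_copied_id;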

My questions:

  • Why do inserts get slow over time?
  • Why does recreating the table and doing an import fix this?
  • Is there any way to rebuild indexes without locking the table for updates?

Greg

5 Answers

11

It sounds like it's either

  • Index unbalancing over time
  • Disk fragmentation
  • Internal InnoDB datafile fragmentation

You could try ANALYZE TABLE foo, which doesn't take locks; it just does a few index dives and takes a few seconds.
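
For example (assuming the table really is called foo), the output looks roughly like:

mysql> ANALYZE TABLE foo;
+----------+---------+----------+----------+
| Table    | Op      | Msg_type | Msg_text |
+----------+---------+----------+----------+
| test.foo | analyze | status   | OK       |
+----------+---------+----------+----------+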

If this doesn't fix it, you can use

mysql> SET PROFILING=1;
mysql> INSERT INTO foo VALUES ($testdata);   -- the slow insert you want to profile
mysql> SHOW PROFILE FOR QUERY 1;

and you should see where most of the time is spent.

Apparently InnoDB performs better when inserts are done in primary-key order; is this your case?
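
As a minimal sketch of the difference (hypothetical tables): an AUTO_INCREMENT primary key means every insert appends to the end of the clustered index, while a random key such as a UUID scatters inserts across index pages and causes page splits over time.

-- Inserts arrive in PK order; new rows append to the clustered index.
CREATE TABLE events_seq (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=InnoDB;

-- Inserts arrive in effectively random PK order; pages split over time.
CREATE TABLE events_uuid (
    id CHAR(36) NOT NULL PRIMARY KEY,   -- e.g. filled with UUID()
    payload VARCHAR(255)
) ENGINE=InnoDB;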

ggiroux
  • Do you have any links to info about index unbalancing? Also I think you may be onto something with the disk being fragmented... I am going to look into this. – Greg Feb 10 '10 at 11:09
  • I remember being hit by this http://bugs.mysql.com/bug.php?id=43660, which is more a case of "wrong index stats on 64 bits platforms", but that can lead to the same kind of issues. – ggiroux Feb 10 '10 at 16:46
10

InnoDB performance is heavily dependent on RAM. If the indexes don't fit in RAM, performance can drop considerably and quickly. Rebuilding the whole table improves performance because the data and indexes are now optimized.
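
A rough way to check whether your indexes fit in memory (the schema name below is a placeholder) is to compare the buffer pool size against the data and index sizes reported by information_schema:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

SELECT table_name,
       data_length  / 1024 / 1024 AS data_mb,
       index_length / 1024 / 1024 AS index_mb
FROM information_schema.tables
WHERE table_schema = 'your_db';   -- hypothetical schema name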

If you are only ever inserting into the table, MyISAM is better suited for that. You won't have locking issues if you are only appending, since records are added to the end of the file. MyISAM also lets you use MERGE tables, which are really nice for taking parts of the data offline or archiving without having to do exports and/or deletes.
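
A minimal sketch of a MERGE table over identically-defined MyISAM tables (all names here are hypothetical):

CREATE TABLE log_2010_01 (id INT NOT NULL, msg VARCHAR(255), INDEX (id)) ENGINE=MyISAM;
CREATE TABLE log_2010_02 (id INT NOT NULL, msg VARCHAR(255), INDEX (id)) ENGINE=MyISAM;

-- Present both as one table; new rows go to the last table in the UNION.
CREATE TABLE log_all (id INT NOT NULL, msg VARCHAR(255), INDEX (id))
    ENGINE=MERGE UNION=(log_2010_01, log_2010_02) INSERT_METHOD=LAST;

-- Archiving a month is then just a matter of redefining the UNION:
ALTER TABLE log_all UNION=(log_2010_02);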

Brent Baisley
  • Citation needed for "heavily dependent on RAM". – Pacerier Apr 09 '15 at 13:28
  • Details on how InnoDB uses memory: http://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.html. Note there have been large improvements in InnoDB from 5.0 to 5.1, 5.5 and beyond. – Brent Baisley Apr 10 '15 at 14:27
1

Track down the my.ini that is actually in use and increase key_buffer_size. I had a 1.5 GB table with a large key where queries per second (all writes) were down to 17. I found it strange that, in the administration panel (while the table was locked for writing to speed up the process), it was doing 200 InnoDB reads per second to only 24 writes per second.

The server was forced to read the index off disk. I changed key_buffer_size from 8M to 128M and, after a restart, performance jumped to 150 queries per second completed, and it only had to perform 61 reads for 240 writes.
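
A sketch of the corresponding my.ini/my.cnf change; as the comments below point out, key_buffer_size only affects MyISAM, so the InnoDB line shows the equivalent setting (the 1G value is only an illustration):

[mysqld]
key_buffer_size = 128M            # MyISAM key cache (was 8M)
innodb_buffer_pool_size = 1G      # InnoDB equivalent; size so indexes fit in RAM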

ppostma1
  • For InnoDB, the innodb_buffer_pool_size directly impacts the amount of rows/indexes stored in memory, which reduces collision lookups prior to writing. – ppostma1 Feb 28 '14 at 22:15
  • key_buffer_size has no effect for InnoDB; it's a MyISAM setting. – Steve Childs Feb 03 '16 at 22:05
1

Updating a table requires its indices to be updated as well. If you are doing bulk inserts, try to do them in one transaction (as the dump and restore does). If the table is write-biased, I would think about dropping the indices anyway, or letting a background job do read-processing of the table (e.g. by copying it to an indexed one).
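
A minimal sketch of batching inserts into a single transaction (reusing the hypothetical table foo from above), so the index maintenance and flushing happen once per batch rather than once per row:

mysql> START TRANSACTION;
mysql> INSERT INTO foo VALUES (...);
mysql> INSERT INTO foo VALUES (...);
mysql> -- ... many more rows ...
mysql> COMMIT;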

hurikhan77
  • I don't think you read the question; the inserts/updates are very fast when the indexes have been rebuilt, i.e. with OPTIMIZE TABLE. They are only slow when the table growth is organic. I want to know why this is. – Greg Feb 09 '10 at 22:09
  • For some reason I was biased toward thinking you do bulk updates, i.e. sometimes adding more rows, sometimes fewer, in bursts. I suppose it is fast as soon as you drop the indices? – hurikhan77 Feb 09 '10 at 23:16
  • I have not tried dropping indices; I can't do that in production, and as soon as I import a dump on a test machine it is fast. I assume that dropping the indices will make it fast though. I may try to copy the data files to a test machine to test this. – Greg Feb 09 '10 at 23:32
  • So then maybe "INSERT DELAYED" helps? Probably no support for InnoDB, however... :-/ – hurikhan77 Feb 10 '10 at 00:05
  • Yeah, not supported in InnoDB; I think this brings a bunch more problems too. – Greg Feb 10 '10 at 11:07
0

Could it be due to fragmentation of XFS?

Copy/pasted from http://stevesubuntutweaks.blogspot.com/2010/07/should-you-use-xfs-file-system.html:

To check the fragmentation level of a drive, for example located at /dev/sda6:

sudo xfs_db -c frag -r /dev/sda6

The result will look something like this:

actual 51270, ideal 174, fragmentation factor 99.66%

That is an actual result I got the first time I installed these utilities, previously having no knowledge of XFS maintenance. Pretty nasty. Basically, the 174 files on the partition were spread over 51270 separate pieces. To defragment, run the following command:

sudo xfs_fsr -v /dev/sda6

Let it run for a while; the -v option shows the progress. After it finishes, try checking the fragmentation level again:

sudo xfs_db -c frag -r /dev/sda6

actual 176, ideal 174, fragmentation factor 1.14%

Much better!

Rafa