
I am using InnoDB. I have a problem with app performance. When I run mysqltuner.pl I get:

[--] Data in InnoDB tables: 8G (Tables: 1890) [!!] Total fragmented tables: 1890

OK, I have run mysqlcheck -p --optimize db. I have found that innodb_file_per_table is disabled. The database hasn't been reindexed for the last two years. How can I do it?

  1. Make a mysqldump?
  2. Stop the mysql service?
  3. Enable innodb_file_per_table in my.cnf?
  4. Start the mysql service?
  5. Import from the mysqldump?
  6. Run mysqlcheck -p --optimize db?

Will everything be ok?
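For what it's worth, here is a minimal sketch of those six steps as shell commands. The database name appdb, the backup path, and the service names are assumptions to adapt for your host; also note that shrinking ibdata1 additionally requires removing it (and the ib_logfile* files) while the server is stopped, which is only safe once every InnoDB table is in the dump.

```
# Sketch only -- "appdb", paths, and service names are assumptions.

# 1. Dump the application database(s).
mysqldump -u root -p --single-transaction --routines --triggers \
  --databases appdb > /backup/appdb.sql

# 2. Stop the server.
sudo systemctl stop mysql

# 3. Enable innodb_file_per_table under [mysqld] in my.cnf:
#      innodb_file_per_table = 1
#    To actually shrink the shared tablespace, also remove ibdata1 and
#    ib_logfile* from the datadir -- ONLY if every InnoDB table is in the dump.

# 4. Start the server (it recreates ibdata1 and the log files at their initial size).
sudo systemctl start mysql

# 5. Reload; each table is recreated in its own .ibd file.
mysql -u root -p < /backup/appdb.sql

# 6. Optional -- largely redundant straight after a reload.
mysqlcheck -u root -p --optimize appdb
```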

Myk Syk
  • Pretty sure you'll need to drop the tables after dump but before restore; and pretty sure the shared ibdata will not shrink. – Uueerdo Nov 25 '15 at 22:08
  • how "sure the shared ibdata will not shrink"? – Myk Syk Nov 25 '15 at 22:21
  • I haven't dug into the details for a couple years, but last I recall the only supported way to shrink the main `ibdata1` file was to backup your data, uninstall and reinstall MySQL Server, then restore. – Uueerdo Nov 25 '15 at 22:56

2 Answers


Tables fragmented

Bogus.

All InnoDB tables are always (according to that tool) fragmented.

In reality, only 1 table in 1000 needs to be defragmented.

OPTIMIZE TABLE foo; will defrag a single table. But, again, I say "don't bother".
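As a side note, a sketch of what that looks like (the database and table names are placeholders): on InnoDB the server maps OPTIMIZE to a rebuild and says so in its output.

```
# Placeholder names; on InnoDB, OPTIMIZE is executed as a recreate + analyze,
# and MySQL notes: "Table does not support optimize, doing recreate + analyze instead".
mysql -u root -p appdb -e "OPTIMIZE TABLE foo;"
```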

ibdata1 bloated

On the other hand... If you are concerned about the size of ibdata1, and Data_free (SHOW TABLE STATUS) shows that a large chunk of ibdata1 is "free", then the only cure is painful. As already mentioned: dump, stop, remove ibdata1, restart, reload.
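A hedged way to check that before committing to the pain (assumes you can query information_schema; note that with innodb_file_per_table OFF, Data_free reports the free space of the shared tablespace, so every table living in ibdata1 shows the same number and you should read the value rather than sum it):

```
# How much free space does InnoDB report? (The value repeats for tables sharing ibdata1.)
mysql -u root -p -e "
  SELECT TABLE_SCHEMA, TABLE_NAME,
         ROUND(DATA_FREE / 1024 / 1024) AS data_free_mb
  FROM   information_schema.TABLES
  WHERE  ENGINE = 'InnoDB'
  ORDER  BY DATA_FREE DESC
  LIMIT  10;"
```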

But... If you don't have enough disk space to dump everything, you are in deep weeds.

You have 8GB of tables? (Sum Data_length and Index_length across all InnoDB tables.) If ibdata1 is, say, 10GB, don't bother. If it is 100GB, then you have a lot of wasted space. But if you are not running out of disk space, again I say "don't bother".
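One way to get that sum, and to compare it against the file itself (a sketch; the datadir path is an assumption):

```
# Total InnoDB data + index size, in GB.
mysql -u root -p -e "
  SELECT ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 2) AS innodb_gb
  FROM   information_schema.TABLES
  WHERE  ENGINE = 'InnoDB';"

# Compare with the size of the shared tablespace (path may differ on your system).
ls -lh /var/lib/mysql/ibdata1
```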

If you are running out of disk space and ibdata1 seems to have a lot "free", then do the dump etc. But additionally: If you have 1890 tables, probably most are "tiny". Maybe a few are somewhat big? Maybe some table fluctuates dramatically in size (e.g., add a million rows, then delete most of them)? I suggest having innodb_file_per_table ON when creating big or fluctuating tables; OFF for tiny tables.

Tiny tables take less space when living in ibdata1; big/fluctuating tables can be cleaned up via OPTIMIZE if ever needed.
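If you adopt that split, one sketch of how to do it (innodb_file_per_table is a global variable, so this assumes you can briefly toggle it and that nobody else is creating tables at the same moment; the database and table names are made up):

```
# Create a big/fluctuating table in its own .ibd file, then restore the default
# so small tables keep going into ibdata1. Requires a privileged account.
mysql -u root -p appdb -e "
  SET GLOBAL innodb_file_per_table = ON;
  CREATE TABLE big_log (
    id         BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    created_at DATETIME NOT NULL,
    payload    TEXT
  ) ENGINE=InnoDB;
  SET GLOBAL innodb_file_per_table = OFF;"
```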

Rick James

Yes, that would do it. Take a backup before you start and test the entire process out first on a copy of the host, especially the exact mysqldump command; you don't want databases like 'information_schema' in the dump.
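A sketch of such a dump command (the database names are placeholders; listing your application databases explicitly keeps information_schema and the other system schemas out of the dump):

```
# Dump only the application databases, never the system schemas.
mysqldump -u root -p --single-transaction --routines --triggers \
  --databases appdb1 appdb2 > /backup/app.sql

# Sanity check: which databases actually ended up in the dump file?
grep '^CREATE DATABASE' /backup/app.sql
```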

Also ensure that other services cannot connect to your db once you start the export - any data they change would be lost during the import.
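One hedged way to enforce that during the export/import window: stop the application services, or temporarily disable TCP connections so only local socket clients can reach the server (the skip-networking option is real; the service name below is a placeholder, and whether this fits depends on how your applications connect):

```
# Option A: stop whatever writes to the database (placeholder service name).
sudo systemctl stop my-app.service

# Option B: add this under [mysqld] in my.cnf and restart MySQL; TCP connections
# are refused but local Unix-socket access still works:
#   skip-networking
```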

There are detailed instructions in this previous question: Howto: Clean a mysql InnoDB storage engine?

Steve E.