
I have a table which contains around 2 million records, all of which are for the same node. I cannot partition based on date, as data is continually inserted and deleted each day.

The remaining data is in string format.

Every 15 minutes, the application takes a backup from this table (only the partial data collected during those 15 minutes); while this operation runs, the table is locked and cannot be accessed.

What kind of partitioning can I use to help solve this problem?

Phanindra
  • 20 lakhs is not a very large number. Can you please explain why you want to partition, so that people can help you better. – Zimbabao Feb 18 '11 at 12:50
  • For those not familiar with the Indian way of numbering, 1 lakh = 100,000 (100k). So it's a table with 2 million rows. – Augusto Feb 18 '11 at 12:59
  • Can you post the table definition? Without knowing what's in the table, we can't make suggestions. Also, even if data will be deleted and inserted each day, that still does not block you from partitioning on date: earlier partitions don't suddenly become read-only. – Konerak Mar 05 '11 at 14:48

1 Answer


If the problem you're trying to solve is the locking caused by the backup, partitioning won't help with that.

Also, there are better solutions to avoid locking during backups. These solutions work equally well with or without partitioned tables.

If you use InnoDB tables (which you should), you can dump data with transaction isolation instead of locking.

mysqldump --single-transaction mydatabase mytable ...
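
For instance, a fuller invocation might look like the following (the user, database, and table names here are placeholders):

mysqldump --single-transaction --quick --user=backup_user -p mydatabase mytable > mytable_backup.sql

The --single-transaction option dumps from a consistent snapshot (and turns off table locking), while --quick streams rows instead of buffering the whole table in memory; on InnoDB, concurrent reads and writes continue while the dump runs.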

Read the manual for more information: https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html

You can also use Percona XtraBackup to create a physical backup instead of a data dump. This also depends on using InnoDB tables. Percona XtraBackup is free software.
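
As a rough sketch (assuming XtraBackup 2.4 or later; the target directory and credentials are placeholders), a backup-and-prepare cycle looks like:

xtrabackup --backup --user=backup_user --password=secret --target-dir=/data/backups/full
xtrabackup --prepare --target-dir=/data/backups/full

XtraBackup copies the InnoDB data files while tracking the redo log, and the --prepare step replays that log to make the copy consistent, so the server keeps serving reads and writes during the backup.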

Bill Karwin