I have a Linux server and a huge MySQL table that I need to dump. The thing is, the server is in production and I don't want to crash it by dumping everything at once. I also intend to pipe the dump over SSH to another server, because I don't want to fill up the local disk. I know about the mysqldump --where clause, but I don't want to script those ID ranges myself. Is there any native functionality in MySQL that allows dumping in parts? It doesn't have to be mysqldump, but it needs to work in parts so I don't crash the server, and I'll need to pipe it over SSH.
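To be concrete, this is the kind of ID scripting I mean and would rather avoid maintaining (a sketch only; the database, table, column, and host names are placeholders, and it assumes an AUTO_INCREMENT primary key id):

    # Placeholder names throughout; assumes an AUTO_INCREMENT primary key `id`.
    # One slice of the table, streamed straight to the other server:
    mysqldump --single-transaction --quick mydb big_table \
      --where="id >= 0 AND id < 1000000" \
      | gzip | ssh backup@otherhost "cat > /backups/big_table.part0.sql.gz"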

Additional info: records are never updated in this table; they are only added.

MySQL documentation: as outlined in the docs, mysqldump is not suited for large databases. They suggest backing up the raw data files instead.
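As far as I understand, that raw-data-file approach could also be streamed over SSH, something like the following (a sketch only; it assumes a brief server stop is acceptable, which is what makes the copy consistent, and that the data directory is /var/lib/mysql; a tool like Percona XtraBackup can do a hot physical backup instead, but that is a separate tool):

    # Sketch only: a cold physical backup streamed over SSH.
    # Assumes the data directory is /var/lib/mysql and a brief stop is acceptable.
    sudo systemctl stop mysql
    sudo tar -czf - -C /var/lib/mysql . \
      | ssh backup@otherhost "cat > /backups/mysql-datadir.tar.gz"
    sudo systemctl start mysql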

1 Answer

If your concern really is the load and not crashing production, then maybe you should take a look at this post: How can I slow down a MySQL dump as to not affect current load on the server? It covers how to back up large production databases using the right mysqldump arguments.
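As a sketch of what those arguments might look like combined with your SSH pipe (the rate limit, names, and paths are placeholders to tune for your setup):

    # Throttled full-table dump over SSH; names and the 10m limit are placeholders.
    mysqldump --single-transaction --quick mydb big_table \
      | pv -q -L 10m \
      | gzip \
      | ssh backup@otherhost "cat > /backups/big_table.sql.gz"

Here --single-transaction takes a consistent InnoDB snapshot without locking the table, --quick streams rows instead of buffering the whole result set in memory, and pv -L caps the throughput so the dump never saturates the server's I/O.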

Slicing a production database may end up being more dangerous in the end. I also don't know how often entries get updated in the db, but slicing the export would give you an inconsistent dump: slices of the same table taken at different times.

  • Thanks for the link! In my case the records never get updated; they only get added, and very often. Let's say we are talking about tens of millions of records. My main concerns are the load on the server, and also, with such a long-running operation, how it will continue from the place it stopped if it gets interrupted somehow. I already scripted such an operation (roughly like the sketch after these comments); I want to find out if there is some native functionality that does this. – George Mogilevsky Apr 05 '19 at 10:47
  • I will accept this answer as the most suitable, since it actually addresses the question as asked. – George Mogilevsky Apr 05 '19 at 12:53
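For anyone who does go the slicing route despite the caveat above, a resumable version along the lines George describes in the comments might look roughly like this (a sketch only; the database, table, paths, and chunk size are all placeholders, and it assumes append-only rows with an AUTO_INCREMENT primary key id):

    #!/usr/bin/env bash
    # Resumable chunked dump sketch -- all names and paths are placeholders.
    # Assumes append-only rows with an AUTO_INCREMENT primary key `id`.
    set -euo pipefail
    STATE=/var/tmp/big_table.offset          # first id of the next chunk to dump
    CHUNK=1000000
    start=$(cat "$STATE" 2>/dev/null || echo 0)
    max=$(mysql -N -e "SELECT MAX(id) FROM mydb.big_table")

    while (( start <= max )); do
      end=$(( start + CHUNK ))
      mysqldump --single-transaction --quick --no-create-info mydb big_table \
        --where="id >= $start AND id < $end" \
        | gzip | ssh backup@otherhost "cat > /backups/big_table.$start.sql.gz"
      echo "$end" > "$STATE"   # persist progress only after the chunk succeeded
      start=$end
    done

Because of set -euo pipefail, a failed chunk aborts the script before the state file is updated, so a rerun picks up again at the chunk that failed.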