When I run the following mysqldump command, it groups the INSERT statements together.

mysqldump --user=username --password=password --host=localhost database | gzip > /path/to/folder/backup.sql.gz

E.g.:

INSERT INTO tbl_name VALUES (1,2,3), (4,5,6), (7,8,9)

This is great for saving space, but it makes importing a 3 GB SQL file difficult for me. I have a script that uploads a large SQL file in parts; it breaks the file up by lines so that the server/page doesn't time out.
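(For what it's worth, once each row is its own statement, the splitting itself is trivial with standard Unix tools; a minimal sketch, assuming the one-INSERT-per-line format shown below:

zcat /path/to/folder/backup.sql.gz | split -l 10000 - /path/to/folder/chunk_

Each chunk_* file then contains at most 10,000 complete statements.)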

What I'd prefer is this:

INSERT INTO tbl_name VALUES (1,2,3);
INSERT INTO tbl_name VALUES (4,5,6);
INSERT INTO tbl_name VALUES (7,8,9);

Is that possible?

Ben Sinclair

1 Answer

You can do it via:

mysqldump -uUSER --skip-extended-insert -p DATABASE | gzip > dump.sql.gz

where USER is your MySQL user and DATABASE is your database name.
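To import it back later, a minimal sketch using the same placeholders:

gunzip < dump.sql.gz | mysql -uUSER -p DATABASE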

But really, why do that? Multi-row inserts are better in terms of performance: the server parses far fewer statements, and the client makes far fewer round trips during the import.

Also, a hint: if you're typing this command by hand (i.e. it's not part of some script), don't put the password on the command line, since the command will be saved in shell history by default, which may be insecure. In the sample above, the standalone -p option triggers an interactive password prompt instead.
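If you run this often and don't want to type the password every time, one common alternative (an assumption about your setup, not something the command above requires) is a client option file, e.g. ~/.my.cnf:

[client]
user=USER
password=YOUR_PASSWORD

Then chmod 600 ~/.my.cnf so only you can read it; mysqldump and mysql will pick up the credentials automatically, and you can drop -u and -p from the command line.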

Alma Do
  • That's what I wanted. Let's just say I'm a bit of a noob and am using this script to import my database without timing out: http://www.ozerov.de/bigdump/. At the moment I don't know all the fancy side of splitting SQL files, uploading with sockets, etc. This is just a quick way for me to get what I need done until I can learn more about big databases :) Appreciate your help! – Ben Sinclair Aug 29 '13 at 09:15