92

I'd like to get dumps of each MySQL table into separate files. The manual indicates that the syntax for this is

mysqldump [options] db_name [tbl_name ...]

Which indicates that you know the table names beforehand. I could set up a script that knows each table name now, but say I add a new table down the road and forget to update the dump script. Then I'm missing dumps for one or more tables.

Is there a way to automagically dump each existing table into a separate file? Or am I going to have to do some script-fu: query the database, get all the table names, and dump them by name?

If I go the script-fu route, what scripting languages can access a MySQL database?

kaiffeetasse
user151841

12 Answers

93

Here's a script that dumps table data as SQL commands into separate, compressed files. It does not require being on the MySQL server host, doesn't hard-code the password in the script, and is just for a specific db, not all db's on the server:

#!/bin/bash

# dump-tables-mysql.sh
# Descr: Dump MySQL table data into separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: @Trutane
# Ref: http://stackoverflow.com/q/3669121/138325
# Notes:
#  * Script will prompt for password for db access.
#  * Output files are compressed and saved in the current working dir, unless DIR is
#    specified on command-line.

[ $# -lt 3 ] && echo "Usage: $(basename $0) <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1

DB_host=$1
DB_user=$2
DB=$3
DIR=$4

[ -n "$DIR" ] || DIR=.
test -d $DIR || mkdir -p $DIR

echo -n "DB password: "
read -s DB_pass
echo
echo "Dumping tables into separate SQL command files for database '$DB' into dir=$DIR"

tbl_count=0

for t in $(mysql -NBA -h $DB_host -u $DB_user -p$DB_pass -D $DB -e 'show tables') 
do 
    echo "DUMPING TABLE: $DB.$t"
    mysqldump -h $DB_host -u $DB_user -p$DB_pass $DB $t | gzip > $DIR/$DB.$t.sql.gz
    tbl_count=$(( tbl_count + 1 ))
done

echo "$tbl_count tables dumped from database '$DB' into dir=$DIR"
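The loop-and-count mechanics of the script can be sanity-checked offline by substituting a stub for the mysql call (the table names below are made up):

```shell
#!/bin/bash
# Stand-in for: mysql -NBA ... -e 'show tables' (no server needed)
fake_show_tables() { printf 'users\norders\nlog_2024\n'; }

tbl_count=0
for t in $(fake_show_tables); do
    echo "would dump table: $t"
    tbl_count=$(( tbl_count + 1 ))
done
echo "$tbl_count tables"   # → 3 tables
```

This also shows why the unquoted `$(...)` works here: word splitting turns each line of `show tables` output into one loop item, which is safe as long as table names contain no whitespace.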
rubo77
Trutane
    the best solution imo... I'm using a modified version of this script which allows passing the password as env variable. If someone needs this functionality: http://sprunge.us/fBDL?bash – Florian F Jul 31 '14 at 14:14
    imho best solution too : + mysqldump --extended-insert=FALSE -u "$mysql_user" -p"$mysql_user_pw" "$db" $t \ | gzip > "$tables_dump_dir/$t.sql.gz" a bit slower , but works on any data and any row data lengths ... – Yordan Georgiev May 16 '15 at 19:46
  • thx! I used your idea to generate a backup script that backups all databases: https://github.com/rubo77/mysql-backup.sh/blob/master/mysql-backup.sh – rubo77 Nov 06 '15 at 05:43
    Unfortunately, this can't be done as a single transaction, so you don't necessarily get a series of files that can be re-loaded without any FK errors. – Christopher Schultz Nov 21 '15 at 15:14
  • probably you want to add some more arguments like `mysqldump --extended-insert=FALSE --order-by-primary --complete-insert` – Mladen Adamovic Aug 03 '16 at 16:02
  • super useful. Thanks – Dylan B Sep 15 '16 at 22:12
  • Thanks for tips about additional mysqldump options of interest. Note that since the `--opt` group of options is on by default, you can skip them all using `--skip-opt` or selectively skip the ones you don't want, e.g. `--skip-extended-insert`. See the docs for [option groups](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#mysqldump-option-groups). – Trutane Dec 05 '18 at 21:11
  • Also if you want to compare dumped files, disable comments with --skip-comments in order to have strictly identically files if no update has been done to the table : permits CRC comparison – sdespont May 24 '22 at 08:44
  • I'm using it on windows, how can i stop it from closing the command prompt at the end – sairfan Jul 20 '22 at 02:33
  • @sairfan You can put a `pause` or `cmd /k` at the end of your batch file. See this post for details: https://stackoverflow.com/q/988403/138325 – Trutane Dec 16 '22 at 05:59
  • @Trutane this script is perfect. Thanks. Use it manually to dump my database sporadically. Do you have any suggestion on how to make this a cronjob (regarding password entry) – php-b-grader Aug 04 '23 at 04:42
79

The mysqldump command-line program does this for you, although the docs are very unclear about it.

One thing to note is that ~/output/dir has to be writable by the user that owns mysqld. On Mac OS X:

sudo chown -R _mysqld:_mysqld ~/output/dir
mysqldump --user=dbuser --password --tab=~/output/dir dbname

After running the above, you will have one tablename.sql file containing each table's schema (create table statement) and tablename.txt file containing the data.

If you want a dump with schema only, add the --no-data flag:

mysqldump --user=dbuser --password --no-data --tab=~/output/dir dbname
Salient Digital
    Well, I really would like the table data dumps to be SQL inserts also :Þ – user151841 Jun 10 '11 at 13:26
    `mysqlimport` can be used to import the resulting data txt files. Also any table names containing a period `(.)` eg: `my.table` will only result in a data file of the same name (without .txt extension). http://dev.mysql.com/doc/refman/5.0/en/reloading-delimited-text-dumps.html Additionally this seems to be the only way to perform a `--single-transaction` table per file dump. – Will B. Dec 07 '14 at 04:47
    Note that this solution requires the [`FILE`](http://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_file) privilege on the server, *and* will write the files into the disk on the server. – Christopher Schultz Nov 21 '15 at 15:12
    --tab doesn't generate inserts, which would be required if you want to manually join databases on two machines, for example – Mladen Adamovic Aug 03 '16 at 11:31
21

You can accomplish this by:

  1. Getting the list of databases in MySQL
  2. Dumping each database with mysqldump
# Optional variables for a backup script
MYSQL_USER="root"
MYSQL_PASS="something"
BACKUP_DIR=/srv/backup/$(date +%Y-%m-%dT%H_%M_%S);
test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, exclude information_schema
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
  # dump each database in a separate file
  mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"
done
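The exclusion step can be checked in isolation with stub input (the database names here are invented):

```shell
#!/bin/bash
# Stand-in for the output of: mysql -B -s ... -e 'show databases'
dbs=$(printf 'mydb\ninformation_schema\nshop\n' | grep -v information_schema)
echo "$dbs"   # mydb and shop survive; information_schema is filtered out
```

The same `grep -v` pattern can be extended with `-e` for other system schemas (e.g. performance_schema, sys) depending on your server version.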
Xavi Montero
Elias Torres Arroyo
8

Here is the corresponding import.

#!/bin/bash

# import-files-mysql.sh
# Descr: Import separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: Will Rubel
# Notes:
#  * Script will prompt for password for db access.

[ $# -lt 3 ] && echo "Usage: $(basename $0) <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1

DB_host=$1
DB_user=$2
DB=$3
DIR=$4

DIR=$DIR/*


echo -n "DB password: "
read -s DB_pass
echo
echo "Importing separate SQL command files for database '$DB' into '$DB'"

file_count=0


for f in $DIR

do 
    echo "IMPORTING FILE: $f"

    gunzip -c $f | mysql -h $DB_host -u $DB_user -p$DB_pass $DB

    (( file_count++ ))
done

echo "$file_count files imported into database '$DB'"
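The gunzip-and-pipe step can be exercised without a server by round-tripping a file through gzip (the file name and SQL text are arbitrary):

```shell
#!/bin/bash
# Create a compressed "dump", then read it back the way the import loop does
echo 'SELECT 1;' | gzip > /tmp/demo.sql.gz
restored=$(gunzip -c /tmp/demo.sql.gz)   # this is what would be piped into mysql
echo "$restored"   # → SELECT 1;
```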
WillRubel
    Don't just code-dump; please provide some explanation as to what the code does. – rgettman Aug 06 '13 at 19:59
  • Here is how to use it: ./import-files-mysql.sh host_name dbname db ./ , I assume all your gz files are in current folder, please replace your db login info accordingly, the script will prompt you to enter password after the command. – Dylan B Sep 15 '16 at 22:15
7

I recently needed to back up a big database (the uncompressed dump file is more than 250 GB) and I found the answers to this question really helpful.

I started with @Trutane's approach and it worked like a charm. But I was concerned about dumping tables in different MySQL sessions, because that could at some point lead to an inconsistent backup.

After some research and testing, I developed a different solution based on gawk. The basic idea is to create a dump of the whole database using mysqldump with --single-transaction=true, and then process the output with gawk to produce a separate file for every table.

So I can call:

mysqldump --single-transaction=true -u DBUSERNAME -p DBNAME | \
  gawk -v 'database=DBNAME' -f 'backup.awk' -

And it produces, in the current folder, a bunch of $database.$table.sql files with the schema of every table and $database.$table.sql.gz files with the content of every table. Thanks to the --single-transaction=true parameter, the whole dump happens in a single transaction and data consistency is ensured.

The content of backup.awk is:

# Split mysqldump output in different files, two per table:
# * First file is named $database.$table.sql and it contains the table schema
# * Second file is named $database.$table.sql.gz and it contains the table data 

# The 'database' variable is expected to be provided in command-line
BEGIN {
    insert=0
    filename=sprintf("%s.header.sql", database);
}

# A line starting with "INSERT INTO" activates inserting mode
/^INSERT INTO/      { insert=1 }

# A line containing "-- Table structure for table `name-of-table`" finishes inserting mode
# It is also used to detect table name and change file names accordingly
match($0, /-- Table structure for table `(.*)`/, m) {
    insert=0;
    table=m[1];
    filename=sprintf("%s.%s.sql", database, table);
    print sprintf("Dumping table %s", table);
}

# If in inserting mode, line is piped to a gzipped file,
# if it is not, it is redirected to an uncompressed schema file
{
    if (insert == 1) {
       output = sprintf("gzip > %s.gz", filename);
       print | output
    } else {
       print > filename;
    }
}
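The marker line the awk program keys on can be tested in isolation; this sed one-liner pulls the table names out of a fabricated fragment of mysqldump output:

```shell
#!/bin/bash
# Fabricated snippet resembling mysqldump's section headers
sample='-- Table structure for table `users`
INSERT INTO `users` VALUES (1);
-- Table structure for table `orders`'

names=$(printf '%s\n' "$sample" | sed -n 's/^-- Table structure for table `\(.*\)`/\1/p')
echo "$names"   # one table name per line: users, then orders
```

If the comment headers in your dump look different (e.g. because of --skip-comments), the awk regex would need adjusting to match.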
    The fact that this is done in a single transaction should guarantee a clean backup and deserves more credit than this has received so far (there's another, clumsier IMO, solution above that does it) – Brian C Aug 25 '22 at 11:01
  • Totally agreed with @BrianC. Very nice solution. Thank you. This is what should have been really recommended for this task! – Denis May 18 '23 at 10:42
    This script can be improved to be perfect: 1. ```awk /^INSERT INTO/ { insert=1 } ``` 2. ```awk print sprintf("Dumping table %s", table); ``` – Denis May 18 '23 at 12:08
  • Thank you very much for your improvements, I have added them to the post. – Francesc Rambla Jun 16 '23 at 15:38
6
#!/bin/bash

for i in $(mysql -uUSER -pPASSWORD DATABASE -e "show tables;" | grep -v Tables_in_); do
    mysqldump -uUSER -pPASSWORD DATABASE "$i" > "/backup/dir/$i.sql"
done

tar -cjf "backup_mysql_"$(date +'%Y%m%d')".tar.bz2" /backup/dir/*.sql
djadk
3

If you want to dump all tables from all databases, just combine Elias Torres Arroyo's and Trutane's answers. And if you don't want to give your password on the terminal, store it in an extra config file (chmod 0600); see Mysqldump launched by cron and password security.

#!/bin/bash

# this file
# a) gets all databases from mysql
# b) gets all tables from all databases in a)
# c) creates subfolders for every database in a)
# d) dumps every table from b) in a single file

    # this is a mixture of scripts from Trutane (http://stackoverflow.com/q/3669121/138325) 
    # and Elias Torres Arroyo (https://stackoverflow.com/a/14711298/8398149)

# usage:
# sk-db.bash [options]
# where the options are:

# -d "dbs to leave out"
# -t "tables to leave out"
# -u "user who connects to the database"
# -h "db host"
# -f "/backup/folder"



user='root'
host='localhost'
backup_folder=''
leave_dbs=(information_schema mysql)
leave_tables=()
while getopts ":d:t:u:h:f:" opt; do
  case $opt in
    d) leave_dbs=( $OPTARG )
    ;;
    t) leave_tables=( $OPTARG )
    ;;
    u) user=$OPTARG
    ;;
    h) host=$OPTARG
    ;;
    f) backup_folder=$OPTARG
    ;;

    \?) echo "Invalid option -$OPTARG" >&2
    ;;
  esac
done
echo '****************************************'
echo "Database Backup with these options"
echo "Host $host"
echo "User $user"
echo "Backup in $backup_folder"
echo '----------------------------------------'
echo "Databases to omit:"
printf "%s\n" "${leave_dbs[@]}"
echo '----------------------------------------'
echo "Tables to omit:"
printf "%s\n" "${leave_tables[@]}"
echo '----------------------------------------'


BACKUP_DIR=$backup_folder/$(date +%Y-%m-%dT%H_%M_%S);
CONFIG_FILE=/root/db-config.cnf

function contains() {
    local n=$#
    local value=${!n}
    for ((i=1;i < $#;i++)) {
        if [ "${!i}" == "${value}" ]; then
            echo "y"
            return 0
        fi
    }
    echo "n"
    return 1
}


test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, exclude information_schema
database_count=0
tbl_count=0

for db in $(mysql --defaults-extra-file=$CONFIG_FILE -B -s -u $user -e 'show databases' )
do
    if [ $(contains "${leave_dbs[@]}" "$db") == "y" ]; then
        echo "leave database $db as requested"
    else

       # dump each database in a separate file
       (( database_count++ ))
       DIR=$BACKUP_DIR/$db
       [ -n "$DIR" ] || DIR=.

       test -d $DIR || mkdir -p $DIR

       echo
       echo "Dumping tables into separate SQL command files for database '$db' into dir=$DIR"

       for t in $(mysql --defaults-extra-file=$CONFIG_FILE -NBA -h $host -u $user -D $db -e 'show tables')
       do
           if [ $(contains "${leave_tables[@]}" "$db.$t") == "y" ]; then
               echo "leave table $db.$t as requested"
           else
               echo "DUMPING TABLE: $db.$t"
               mysqldump --defaults-extra-file=$CONFIG_FILE -h $host -u $user $db $t > $DIR/$db.$t.sql
               tbl_count=$(( tbl_count + 1 ))
           fi
       done

       echo "Database $db is finished"
       echo '----------------------------------------'

    fi
done
echo '----------------------------------------'
echo "Backup completed"
echo '**********************************************'

These also helped:

Check if bash array contains value

arrays in bash

named arguments in script

sneaky
2

It looks like everybody here forgot about `SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;`, which is supposed to speed up the import process...

#!/bin/bash
MYSQL_USER="USER"
MYSQL_PASS="PASS"

if [ -z "$1" ]
  then
    echo "Dumping all DB ... in separate files"
    for I in $(mysql -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' -s --skip-column-names); 
    do 
      echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > "$I.sql"
      mysqldump -u $MYSQL_USER --password=$MYSQL_PASS $I >> "$I.sql"; 
      echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> "$I.sql"
      gzip "$I.sql"
    done
    echo "END."
else
      echo "Dumping $1 ..."
      echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > "$1.sql"
      mysqldump -u $MYSQL_USER --password=$MYSQL_PASS $1 >> "$1.sql"; 
      echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> "$1.sql"
      gzip "$1.sql"
fi
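The wrapping itself is easy to verify with a stub standing in for the mysqldump output (the INSERT line is invented):

```shell
#!/bin/bash
# Build a wrapped "dump" from stub content and inspect its first and last lines
echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > /tmp/wrapped.sql
echo "INSERT INTO t VALUES (1);" >> /tmp/wrapped.sql   # stub for mysqldump output
echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> /tmp/wrapped.sql

head -n 1 /tmp/wrapped.sql   # the speed-up settings come first
tail -n 1 /tmp/wrapped.sql   # and are restored (plus commit) at the end
```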
WonderLand
0

I'm no bash master, but I'd just do it with a bash script. Without hitting MySQL, with knowledge of the data directory and database name, you could just scan for all .frm files (one for every table in that db/directory) to get a list of tables.

I'm sure there are ways to make it slicker and accept arguments or whatnot, but this worked well for me.

tables_in_a_db_to_sql.sh

#!/bin/bash

database="this_is_my_database"
datadir="/var/lib/mysql/"
datadir_escaped="\/var\/lib\/mysql\/"

all_tables=($(ls $datadir$database/*.frm | sed s/"$datadir_escaped$database\/"/""/g | sed s/.frm//g))

for t in "${all_tables[@]}"; do
        outfile=$database.$t.sql
        echo "-- backing up $t to $outfile"
        echo "mysqldump [options] $database $t > $outfile"
        # mysqldump [options] $database $t > $outfile
done

Fill in the [options] and desired outfile convention as you need, and uncomment the last mysqldump line.
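As a side note, `basename` with a suffix argument can replace the hand-escaped sed for stripping the path and extension; here it is exercised on throwaway .frm files in a temp directory:

```shell
#!/bin/bash
# Demonstrate table-name extraction on fake .frm files (no MySQL data dir needed)
demo_dir=$(mktemp -d)
touch "$demo_dir/users.frm" "$demo_dir/orders.frm"

names=$(for f in "$demo_dir"/*.frm; do basename "$f" .frm; done)
echo "$names"   # orders, then users (glob order is alphabetical)

rm -rf "$demo_dir"
```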

Riedsio
  • this code when i run show this error msg: Syntax error: "(" unexpected (maybe its line no: 9 ).any suggestion – Syed Ahmed Dec 24 '12 at 11:23
0

For Windows Servers, you can use a batch file like so:

set year=%DATE:~10,4%
set day=%DATE:~7,2%
set mnt=%DATE:~4,2%
set hr=%TIME:~0,2%
set min=%TIME:~3,2%

IF %day% LSS 10 SET day=0%day:~1,1%
IF %mnt% LSS 10 SET mnt=0%mnt:~1,1%
IF %hr% LSS 10 SET hr=0%hr:~1,1%
IF %min% LSS 10 SET min=0%min:~1,1%

set backuptime=%year%-%mnt%-%day%-%hr%-%min%
set backupfldr=C:\inetpub\wwwroot\backupfiles\
set datafldr="C:\Program Files\MySQL\MySQL Server 5.5\data"
set zipper="C:\inetpub\wwwroot\backupfiles\zip\7za.exe"
set retaindays=21

:: Switch to the data directory to enumerate the folders
pushd %datafldr%

:: Get all table names and save them in a temp file
mysql --skip-column-names --user=root --password=mypassword mydatabasename -e "show tables" > tables.txt

:: Loop through all tables in temp file so that we can save one backup file per table
for /f "skip=3 delims=|" %%i in (tables.txt) do (
  set tablename=%%i
  mysqldump --user=root --password=mypassword mydatabasename %%i > "%backupfldr%mydatabasename.%backuptime%.%%i.sql"
)
del tables.txt

:: Zip all files ending in .sql in the folder
%zipper% a -tzip "%backupfldr%backup.mydatabasename.%backuptime%.zip" "%backupfldr%*.sql"

echo "Deleting all the files ending in .sql only"
del "%backupfldr%*.sql"

echo "Deleting zip files older than 21 days now"
Forfiles /p %backupfldr% /m *.zip /d -%retaindays% /c "cmd /c del /q @path"

Then schedule it using Windows Task Scheduler.

Also, if you want to exclude certain tables in your backup, note that you can use a where clause on the "show tables" statement, but the column name depends on your database name.

So for example, if your database name is "blah" then your column name in the "show tables" result set will be "tables_in_blah". Which means you could add a where clause something similar to this:

show tables where tables_in_blah <> 'badtable'

or

show tables where tables_in_blah like '%goodtable%'
Vincent
  • did not work, error while running command `>mysql --skip--columns-names` – sairfan Jul 20 '22 at 17:59
  • @sairfan possibly you passed in the wrong password or database name, or your user does not have the privilege to create a new file (as in > tables.txt) – Vincent Jul 20 '22 at 18:37
0

Fill in the path (RUTA) where the backups should be stored. In this case we create one folder per day of the week, so we keep 7 days of backups and they get recycled. The script checks which databases exist and then which tables each database has, and creates one file named db_tablename.sql per table, which can then be restored. Regards.

#!/bin/bash
USER="root"
MYSQL_PASSWORD="password"
RUTA=/hdd/backup/mysql
diasemana=$(date +\%w)
mkdir -m 7777 $RUTA
mkdir -m 7777 $RUTA/infodb
mkdir -m 7777 $RUTA/$diasemana
mysql -u$USER -p$MYSQL_PASSWORD -e "SHOW DATABASES where \`Database\` <> 'information_schema' and \`Database\` <> 'mysql' and  \`Database\` <> 'sys' and \`Database\` <> 'performance_schema';" -N > $RUTA/infodb/db.txt;
for i in $(cat $RUTA/infodb/db.txt);
do
    mysql -u$USER -p$MYSQL_PASSWORD -e "USE $i;show tables;" -N >$RUTA/infodb/$i.txt;
    for j in $(cat $RUTA/infodb/$i.txt);
    do
    mysqldump -u$USER -p$MYSQL_PASSWORD $i $j > $RUTA/$diasemana/$i"_"$j".sql";
        echo $RUTA/$diasemana/$i"_"$j".sql"
    done
done
-1

See the following article by Pauli Marcus:

Howto split a SQL database dump into table-wise files

Splitting an SQL file containing a whole database into per-table files is quite easy: grep the .sql for any occurrence of DROP TABLE, generate the file name from the table name that is included in the DROP TABLE statement, and echo the output to a file. Here is a little script that expects a .sql file as input:

#!/bin/bash

file=$1 # the input file
directory="$file-splitted" # the output directory
output="$directory/header" # the first file containing the header
GREP="DROP TABLE" # what we are looking for

mkdir $directory # create the output directory

while read line
do
   # if the current line contains the wanted statement
   if [ $(echo "$line" | grep -c "$GREP") == "1" ]
   then
      # extract the file name
      myfile=$(echo $line | awk '{print $5}' | sed -e 's/`//g' -e 's/;//g')
      # set the new file name
      output="$directory/$myfile"
   fi
       echo "$line" >> $output # write to file
done < $file
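The file-name extraction at the heart of the loop can be checked on a single fabricated line:

```shell
#!/bin/bash
# A line like the ones mysqldump emits before each table
line='DROP TABLE IF EXISTS `customers`;'
name=$(echo "$line" | awk '{print $5}' | sed -e 's/`//g' -e 's/;//g')
echo "$name"   # → customers
```

Note that `awk '{print $5}'` assumes the exact wording `DROP TABLE IF EXISTS ...`; a plain `DROP TABLE name;` would put the name in field 3 instead.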
kenorb