I have a large SQL file with one database and about 150 tables. I would like to use mysqlimport
to import that database, however, I would like the import process to ignore or skip over a couple of tables. What is the proper syntax to import all tables, but ignore some of them? Thank you.

- I don't know if that is possible. Can you create a backup of the .sql, seek them out in the text and clobber them, or is this something you are going to be doing again and again? You could do a CTRL-H for find and replace with a REM stmt, or at the end run a little "delete from table1;" "delete from table2;" sorta script – Drew May 24 '13 at 12:52
- Are there a lot that need to be omitted? And how are they identified? Maybe consider importing the whole thing, then run a delete table on the ones you don't want to keep. – lurker May 24 '13 at 12:53
- I think that there may be a problem with one of the tables in the SQL file, so my import process does not complete. Do you know whether using the import process from the command line is different from using the import utility from MySQL Workbench? – DanielAttard May 24 '13 at 13:00
- Related: https://stackoverflow.com/questions/425158/how-to-skip-certain-database-tables-with-mysqldump – Jesse Nickles Aug 01 '22 at 18:45
9 Answers
The accepted answer by RandomSeed could take a long time! Importing the table (just to drop it later) could be very wasteful depending on size.
For a file created using
mysqldump -u user -ppasswd --opt --routines DBname > DBdump.sql
I currently get a file about 7GB, 6GB of which is data for a log table that I don't 'need' to be there; reloading this file takes a couple of hours. If I need to reload (for development purposes, or if ever required for a live recovery) I skim the file thus:
sed '/INSERT INTO `TABLE_TO_SKIP`/d' DBdump.sql > reduced.sql
And reload with:
mysql -u user -ppasswd DBname < reduced.sql
This gives me a complete database, with the "unwanted" table created but empty. If you really don't want the tables at all, simply drop the empty tables after the load finishes.
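For example, dropping the leftover empty table afterwards is a one-liner (reusing the credentials and table name from above):
mysql -u user -ppasswd DBname -e 'DROP TABLE IF EXISTS `TABLE_TO_SKIP`;'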
For multiple tables you could do something like this:
sed '/INSERT INTO `TABLE1_TO_SKIP`/d' DBdump.sql | \
sed '/INSERT INTO `TABLE2_TO_SKIP`/d' | \
sed '/INSERT INTO `TABLE3_TO_SKIP`/d' > reduced.sql
There IS a 'gotcha' - watch out for procedures in your dump that might contain "INSERT INTO TABLE_TO_SKIP".
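A quick way to check for that (same names as above) is to grep the original dump and eyeball whether any hits fall inside a procedure body, since sed would delete those lines too and corrupt the routine:
grep -n 'INSERT INTO `TABLE_TO_SKIP`' DBdump.sql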

- wow!!! awesome, i've just reduced a backup from 1GB to 72MB, saved me a lot of time, thanks – Castro Roy Mar 26 '15 at 17:39
- This makes sense, absolutely. You might want to trim the `CREATE TABLE` statements too. – RandomSeed Feb 16 '16 at 10:27
- Consider the ```sed -r``` option and a RegExp in case you want to eliminate more tables in one go, like: ```sed -r '/INSERT INTO `(TABLE1_TO_SKIP|TABLE2_TO_SKIP)`/d' DBdump.sql > reduced.sql``` – Xosofox Jun 23 '16 at 14:18
- just FYI: `TABLE_TO_SKIP` is case-sensitive, it's usually `table_to_skip` :) – res Aug 21 '16 at 22:48
- for Windows, search for "sed for windows" - it can now be found here: http://gnuwin32.sourceforge.net/packages/sed.htm - and use double quotes instead of single quotes for the command – gsziszi Apr 09 '19 at 14:57
- If `sed` is not an option or you need case-insensitive matching: `grep -viE 'INSERT INTO \`(TABLE1_TO_SKIP|TABLE2_TO_SKIP)\`' DBdump.sql > reduced.sql` – DUzun Jul 11 '19 at 08:17
- I am still running the import. Don't know if that worked yet. If it works, I could not express how thankful I would be. – Igorzovisk Jul 11 '19 at 23:30
- FYI: the `sed` command using the `d` flag is used for deleting lines which match the pattern between `/../` – abax Aug 22 '19 at 16:50
- just beware that ```-- Table structure for table```, ```DROP TABLE IF EXISTS```, ```CREATE TABLE (```, ```-- Dumping data for table``` and ```LOCK TABLES WRITE;``` are not removed with this sed command if they exist. Some were not aware, but I'm not gonna be naming myself – CT. Jan 10 '20 at 14:38
"sed '/INSERT INTO `TABLE_TO_SKIP`/d' DBdump.sql > reduced.sql" after this i import and the TABLE_TO_SKIP has been deleted from db already exists after import!!! ----------- it shouldn't becouse this line deleted only INSERT not CREATE Table with DELETE IF EXISTS which is standard included.. in mysqldump. So this scrip makes bad bad bad way!! <---------- read tthis important note – Kamil Dąbrowski Jul 25 '20 at 21:10
- @CT I can concur that _some_ were not aware, including myself. Cheers for that. – Joshua Pinter Feb 05 '21 at 21:54
- Since deleting `INSERT`s may cause errors while restoring some triggers and procedures, I propose using the `-f` flag in the MySQL restore command to skip errors. Afterwards you may check for corrupted triggers and procedures. – s.abbaasi Jan 17 '23 at 14:35
For anyone working with .sql.gz files: I found the following solution to be very useful. Our database was 25GB+ and I had to remove the log tables.
gzip -cd "./mydb.sql.gz" | sed -r '/INSERT INTO `(log_table_1|log_table_2|log_table_3|log_table_4)`/d' | gzip > "./mydb2.sql.gz"
Thanks to Don's answer, Xosofox's comment, and this related post: Use zcat and sed or awk to edit compressed .gz text file
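To double-check the result (table name as above), counting the surviving INSERT lines in the filtered archive should print 0:
zgrep -c 'INSERT INTO `log_table_1`' mydb2.sql.gz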

- Hm, can you use a wildcard in the sed line too? `log_table_*` or such? – Olle Härstedt Jul 06 '23 at 11:55
- The `-r` in `sed -r '/INSERT INTO.../d'` is short for `--regexp-extended` (https://linux.die.net/man/1/sed). So you should be able to use regular expressions. – feskr Jul 07 '23 at 14:10
mysqlimport is not the right tool for importing SQL statements. This tool is meant to import formatted text files such as CSV. What you want to do is feed your SQL dump directly to the mysql client with a command like this one:
bash > mysql -D your_database < your_sql_dump.sql
Neither mysql nor mysqlimport provides the feature you need. Your best chance would be importing the whole dump, then dropping the tables you do not want.
If you have access to the server where the dump comes from, then you could create a new dump with mysqldump --ignore-table=database.table_you_dont_want1 --ignore-table=database.table_you_dont_want2 ...
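A sketch with hypothetical table names (the option can be repeated as many times as needed), reusing the dump flags from the answer above:
mysqldump -u user -p --opt --routines \
  --ignore-table=DBname.log_table_1 \
  --ignore-table=DBname.log_table_2 \
  DBname > DBdump.sql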
Check out this answer for a workaround to skip importing some tables

- Thanks for the comment. My problem seems to be that when I use the import command from within MySQL Workbench, the process works but only imports about a third of my tables and then hangs on a specific table. I can't get all the tables imported because of the problem table. – DanielAttard May 24 '13 at 12:57
- Try to import your dump directly from the command line. Are you getting the same issue? At least you might be able to see some error message. – RandomSeed May 24 '13 at 13:11
Little old, but figure it might still come in handy...
I liked @Don's answer (https://stackoverflow.com/a/26379517/1446005) but found it very annoying that you'd have to write to another file first...
In my particular case this would take too much time and disc space, so I wrote a little bash script:
#!/bin/bash
# Tables whose data should be skipped (example names)
tables=(table1_to_skip table2_to_skip ... tableN_to_skip)
# Join the array into "table1|table2|..." for the grep pattern
tableString=$(printf "|%s" "${tables[@]}")
trimmed=${tableString:1}
grepExp="INSERT INTO \`($trimmed)\`"
# Decompress the dump ($1), drop matching INSERT lines, pipe the rest to mysql
zcat "$1" | grep -vE "$grepExp" | mysql -uroot -p
This does not generate a new SQL script but pipes the dump directly into the database. Also, it does create the tables, it just doesn't import the data (which was the problem I had with huge log tables).
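Assuming the script is saved as skip_tables.sh (name hypothetical) and the dump is gzipped (the script uses zcat), usage would be:
chmod +x skip_tables.sh
./skip_tables.sh DBdump.sql.gz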

Unless you have ignored the tables during the dump with mysqldump --ignore-table=database.unwanted_table, you have to use some script or tool to filter out the data you don't want to import from the dump file before passing it to the mysql client.
Here is a bash/sh function that would exclude the unwanted tables from a SQL dump on the fly (through pipe):
# Accepts one argument, the list of tables to exclude (case-insensitive).
# Eg. filt_exclude '%session% action_log %_cache'
filt_exclude() {
    local excl_tns;
    if [ -n "$1" ]; then
        # trim & replace /[,;\s]+/ with '|' & replace '%' with '[^`]*'
        excl_tns=$(echo "$1" | sed -r 's/^[[:space:]]*//g; s/[[:space:]]*$//g; s/[[:space:]]+/|/g; s/[,;]+/|/g; s/%/[^\`]\*/g');
        grep -viE "(^INSERT INTO \`($excl_tns)\`)|(^DROP TABLE (IF EXISTS )?\`($excl_tns)\`)|^LOCK TABLES \`($excl_tns)\` WRITE" | \
            sed 's/^CREATE TABLE `/CREATE TABLE IF NOT EXISTS `/g'
    else
        cat
    fi
}
Suppose you have a dump created like so:
MYSQL_PWD="my-pass" mysqldump -u user --hex-blob db_name | \
pigz -9 > dump.sql.gz
And want to exclude some unwanted tables before importing:
pigz -dckq dump.sql.gz | \
filt_exclude '%session% action_log %_cache' | \
MYSQL_PWD="my-pass" mysql -u user db_name
Or you could pipe into a file or any other tool before importing to DB.
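For instance, to produce a filtered, recompressed copy instead of importing straight away (output file name is an example):
pigz -dckq dump.sql.gz | \
filt_exclude '%session% action_log %_cache' | \
pigz -9 > dump-reduced.sql.gz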

If desired, you can do this one table at a time:
mysqldump -p sourceDatabase tableName > tableName.sql
mysql -p -D targetDatabase < tableName.sql
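mysqldump also accepts a list of table names after the database name, so several wanted tables (names hypothetical) can be exported in one go:
mysqldump -p sourceDatabase table1 table2 table3 > wantedTables.sql
mysql -p -D targetDatabase < wantedTables.sql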

Here is my script to exclude some tables from a MySQL dump. I use it to restore a DB when I need to keep orders and payments data:
exclude_tables_from_dump.sh
#!/bin/bash
if [ ! -f "$1" ]; then
    echo "Usage: $0 mysql_dump.sql"
    exit
fi

declare -a TABLES=(
    user
    order
    order_product
    order_status
    payments
)

CMD="cat $1"
for TBL in "${TABLES[@]}"; do
    # comment out the DROP so the existing table survives
    CMD+="|sed 's/DROP TABLE IF EXISTS \`${TBL}\`/# DROP TABLE IF EXISTS \`${TBL}\`/g'"
    # keep the existing table instead of recreating it
    CMD+="|sed 's/CREATE TABLE \`${TBL}\`/CREATE TABLE IF NOT EXISTS \`${TBL}\`/g'"
    # skip this table's data
    CMD+="|sed -r '/INSERT INTO \`${TBL}\`/d'"
    # strip routines between DELIMITER ;; and DELIMITER ;
    CMD+="|sed '/DELIMITER\ \;\;/,/DELIMITER\ \;/d'"
done
eval $CMD
It avoids the DROP and re-CREATE of these tables and skips inserting data into them. It also strips all FUNCTIONS and PROCEDURES stored between DELIMITER ;; and DELIMITER ;
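The filtered dump goes to stdout, so (file and database names hypothetical) it can be piped straight into mysql:
bash exclude_tables_from_dump.sh mysql_dump.sql | mysql -u user -p db_name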

I would not use it in production, but if I had to quickly import some backup that contains many smaller tables and one big monster table that might take hours to import, I would most probably run grep -v unwanted_table_name original.sql > reduced.sql
and then mysql -f < reduced.sql

- You might want to watch out for a situation where your unwanted_table_name is 'SALES' but you have another table called 'BIGSALES', or maybe a foreign key field called `another_table_SALES_fk`. Your answer as it stands could lead to a world of pain. ALSO, if you are going to go down this route, surely fgrep would be much faster than grep? – Don Jun 18 '20 at 15:00
Besides the great answers in this topic, it could be that your .sql file does not contain a separate INSERT like this for every row:
INSERT INTO `log`
But they are actually inserted like this:
LOCK TABLES `log` WRITE;
INSERT INTO `log` (`id`, `data`)
VALUES
(1,'The log content'),
(2,'The log content'),
(3,'The log content'),
(4,'The log content');
UNLOCK TABLES;
You'll want to exclude everything between LOCK TABLES `log` WRITE; and the next UNLOCK TABLES; statement:
gzip -dc "$1" | sed \
-e '/LOCK TABLES `log` WRITE;/,/^UNLOCK TABLES/d;' \
-e '/LOCK TABLES `some_other_table` WRITE;/,/^UNLOCK TABLES/d;' \
| gzip > reduced_$(date '+%F_%T').sql.gz
Put that in a reduce_db.sh somewhere on your computer, and every time you need it, execute it like this:
bash /folder/to/reduce_db.sh /path/to/big_db_file.sql.gz
Good to know: only exclude data on which no other data in the database depends. Otherwise your reduced database will end up inconsistent.
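To spot such dependencies beforehand, one option (database name hypothetical) is to ask information_schema which foreign keys reference the table you plan to exclude:
mysql -u user -p -e "SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME FROM information_schema.KEY_COLUMN_USAGE WHERE REFERENCED_TABLE_SCHEMA = 'your_database' AND REFERENCED_TABLE_NAME = 'log';"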
