363

I get this error when I try to source a large SQL file (a big INSERT query).

mysql>  source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    2
Current database: *** NONE ***

ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    3
Current database: *** NONE ***

Nothing in the table is updated. I've tried deleting and re-creating the table/database, as well as restarting MySQL. None of these things resolves the problem.

Here is my max_allowed_packet size:

+--------------------+---------+
| Variable_name      | Value   |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+

Here is the file size:

$ ls -s file.sql 
79512 file.sql

When I try the other method...

$ ./mysql -u root -p my_db < file.sql
Enter password: 
ERROR 2006 (HY000) at line 1: MySQL server has gone away
Ben G
  • Just how big of a file is this? Is it possibly exceeding the max_allowed_packet setting? – Marc B May 06 '12 at 22:49
  • Ok, that's not it. Try pulling out individual queries from the file and running them yourself in the monitor. Something in there is causing a crash/disconnect. – Marc B May 06 '12 at 22:55
  • The queries I randomly pull from the file work fine. I generated the SQL programmatically, and properly escaped everything. So I'm not sure what would cause an error if there is one. – Ben G May 06 '12 at 23:11
  • I too have the same problem... – maaz Mar 25 '13 at 17:57

24 Answers

667
max_allowed_packet=64M

Adding this line to the my.cnf file solved my problem.

This is useful when the columns have large values that cause the issue; you can find the explanation here.

On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6"

On Linux (Ubuntu): /etc/mysql
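
For example, the relevant part of the file might look like this (as the comments below note, the setting must go under the [mysqld] group, not [mysql]):

[mysqld]
max_allowed_packet=64M

Then restart the server for the change to take effect, e.g. on Ubuntu:

sudo service mysql restart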

Saty
Kurt Zhong
  • This solution solved the stated problem for me; nothing could be done via client-side-only configuration/options, and I wasn't willing to go down the road of a programmatic solution via PHP or other. – Richard Sitze Dec 16 '12 at 21:15
  • This solution worked for me on a Fedora 18 Dev system. Also after this I can insert the file via chive. A 10 MB Database. – Ranjith Siji May 27 '13 at 12:28
  • Solved my problem. However if anyone is missing a my.cnf as I was on OS X, what you should do is copy one from /usr/local/mysql/support-files/ to /etc/my.cnf and restart MySQL. – Space Jun 21 '13 at 22:25
  • You can also log into the database as root (or SUPER privilege) and do `set global max_allowed_packet=64*1024*1024;` - doesn't require a MySQL restart as well – razzed Jul 22 '13 at 23:45
  • Solved it for me too on OSX 10.8.4 MySQL 5.6.13! Thx! – Lawrence Aug 02 '13 at 10:55
  • This fixed it for me. my.cnf can be located in the /etc folder. – Sam Vloeberghs Nov 30 '13 at 11:47
  • On windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6", on Ubuntu /etc/mysql – Mike R Jun 09 '14 at 21:40
  • as @Zenexer says below in a comment to another answer, the documentation says you also need to tell the client to use a bigger packet size, something like: "mysql -h -u username -p --max_allowed_packet=1073741824 < db.sql" – Sean Dec 31 '14 at 08:37
  • You should be able to put this on the command line, which avoids temporarily editing a system file: mysql --max_allowed_packet=1G – Jan Steinman Feb 13 '15 at 07:45
  • @KarSho: in a shared host like GoDaddy, you cannot reconfigure the database server. You'll either need to talk to support, or see if it can be configured on a per-connection basis by issuing a MySQL-specific SQL command (see Zloy's answer, for example - you'd need to issue that in your initialisation code). – halfer Mar 30 '15 at 18:11
  • For WAMP users, the setting is here: click WAMP -> MySQL -> my.ini (about line 30) – BillyNair Apr 25 '15 at 07:12
  • For anyone looking for the location of the my.cnf file, you can check [this answer](http://stackoverflow.com/a/2485758/855838). Also do not forget to restart MySQL by typing `sudo service mysql restart` for the changes to the my.cnf file to take effect. – consuela May 14 '15 at 16:01
  • --max_allowed_packet=1GB is a better solution IMO. Thanks for that – Code Abominator Jul 30 '15 at 22:56
  • I used @razzed's answer with 1024*1024*1024 and it worked just fine. Thanks! – Nick Schroeder May 13 '16 at 18:33
  • @razzed, your solution worked; you should add a separate answer, it is really useful. – Milap May 27 '16 at 07:07
  • WAMP > mysql > my.ini, change as mentioned and restart the server – Andrew Oct 14 '16 at 12:27
  • the reason is not always the size like in my case – eyurdakul Feb 01 '17 at 10:37
  • I found the file at: C:\xampp\mysql\bin\my.ini – Marc Stober Dec 26 '17 at 21:55
  • If you are using Amazon RDS MySQL, you'll need to create a parameter group and give the parameter a suitable value, then associate the instance with the new group. You'll also want to select the option to have the update applied immediately if you want to retry your mysqldump import without waiting until the next maintenance window. (Explained here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html#USER_WorkingWithParamGroups.Modifying) – Nick Weavers May 11 '18 at 17:24
  • in case anyone misses, it has to be in the right group, in the config `[mysqld]` – pcarvalho Jan 11 '19 at 04:00
  • If you cannot find my.cnf, look for the my.ini file instead. In Windows, this is located at :\ProgramData\MySQL\MySQL Server 5.6\my.ini – MAbraham1 May 22 '19 at 04:33
  • When updating max_allowed_packet: if the size of the BLOB/TEXT data inserted in one transaction is greater than 10% of the redo log size, we need to update innodb_log_file_size as well, e.g. `max_allowed_packet=64M` and `innodb_log_file_size=64M`. – cooljl31 Jul 11 '19 at 11:28
  • Is it possible that if a connection is left idle, MySQL closes that connection... something like wait_timeout? – mujad Nov 27 '19 at 06:35
  • It is incredible, but even XAMPP 8.1.17 still ships with this variable set to 1M in the MariaDB settings. – aldemarcalazans May 17 '23 at 19:23
190

You can increase the maximum allowed packet size:

SET GLOBAL max_allowed_packet=1073741824;

http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
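
Note that SET GLOBAL only affects connections opened after the change (each session picks up the global value at connect time), so reconnect and then verify:

SHOW VARIABLES LIKE 'max_allowed_packet';

Also, a value set this way does not survive a server restart; to make it permanent, add it to my.cnf as in the accepted answer.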

Nanhe Kumar
  • This worked for me, while the accepted answer did not. I'm guessing this answer's higher value is the root of the solution for me. – John B Jun 18 '14 at 14:48
  • I set max_allowed_packet=1024M in my.cnf – Csaba Toth Mar 12 '15 at 07:09
  • That does the server. You need to do it in the client as well, like mysql --max_allowed_packet=1073741824. – Jan Steinman Jan 23 '16 at 01:05
  • This worked for me. One question: is "1073741824" in bytes? – user2478236 Jul 25 '17 at 05:20
  • For some reason, even though setting this variable in my.cnf showed the change in a default configuration dump, running the mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'" command did not show any change. Only the command in this answer changed the variable properly, and my import completed successfully. – Mudo Jun 26 '20 at 02:38
78

The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:

mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
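
As a comment on the accepted answer notes, human-friendly values also work on the CLI, so a variant like this should work too (hostname and database name are placeholders):

mysql -h <hostname> -u username -p --max_allowed_packet=1G <databasename> < db.sql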
Giacomo1968
Mario Peshev
  • According to the MySQL website, both the marked answer and this should be used. – Zenexer May 08 '14 at 08:35
  • Don't forget to reload the config files or restart the server after changing these settings. – Csaba Toth Mar 12 '15 at 07:09
  • Keep in mind that using `--max_allowed_packet` only affects the client. Consider modifying the MySQL server (mysqld) as well by editing `max_allowed_packet` in the `/etc/my.cnf` file and restarting your MySQL server. – luukvhoudt Aug 02 '16 at 14:25
  • Note that human friendly values of "50M" or "1G" work on the cli and in the my.cnf. https://dev.mysql.com/doc/refman/8.0/en/using-system-variables.html – txyoji Oct 28 '19 at 20:43
42

In general the error:

Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away

means that the client couldn't send a question to the server.


mysql import

In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and could not be executed on the server, so the client fails at the first error it encounters.

So you have the following options:

  • Add the force option (-f) so that mysql proceeds and executes the rest of the queries.

    This is useful if the database has some large cache-related queries which aren't relevant anyway.

  • Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).

  • Dump the database using --skip-extended-insert option to break down the large queries. Then import it again.

  • Try applying the --max-allowed-packet option for mysql (see the sketch below).
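
For example, a rough sketch combining the dump and import options above (user, database, and file names are placeholders):

mysqldump -u root -p --skip-extended-insert my_db > dump.sql
mysql -u root -p -f --max-allowed-packet=512M my_db < dump.sql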


Common reasons

In general this error could mean several things, such as:

  • a query to the server is incorrect or too large,

    Solution: Increase the max_allowed_packet variable.

    • Make sure the variable is under [mysqld] section, not [mysql].

    • Don't be afraid to use large values for testing (like 1G).

    • Don't forget to restart the MySQL/MariaDB server.

    • Double-check that the value was set properly:

      mysql -sve "SELECT @@max_allowed_packet" # or:
      mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
      
  • You got a timeout from the TCP/IP connection on the client side.

    Solution: Increase the wait_timeout variable (see the example after this list).

  • You tried to run a query after the connection to the server has been closed.

    Solution: A logic error in the application should be corrected.

  • Host name lookups failed (e.g. DNS server issue), or server has been started with --skip-networking option.

    Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).

  • The running thread has been killed, so try again.

  • You have encountered a bug where the server died while executing the query.

  • A client running on a different host does not have the necessary privileges to connect.

  • And many more, so learn more at: B.5.2.9 MySQL server has gone away.
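
For instance, a minimal sketch for the timeout case (the value is illustrative; 28800 seconds is the documented default for wait_timeout):

SET GLOBAL wait_timeout = 57600;
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';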


Debugging

Here are a few expert-level debugging ideas:

  • Check the logs, e.g.

    sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
    
  • Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP).

  • Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:

    sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
    
  • On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.

    sudo dtruss -a -fn mysqld 2>&1
    

    See: Getting started with DTracing MySQL

Learn more about how to debug the MySQL server or client at: 26.5 Debugging and Porting MySQL.

For reference, check the source code in the sql-common/client.c file, responsible for throwing the CR_SERVER_GONE_ERROR error for the client command:

MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net, (uchar) command, header, header_length,
                      arg, arg_length))
{
  set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
  goto end;
}
kenorb
26

I solved the error (ERROR 2006 (HY000) at line 97: MySQL server has gone away) and successfully migrated a >5GB SQL file by performing these two steps in order:

  1. Created /etc/my.cnf as others have recommended, with the following contents:

    [mysql]
    connect_timeout = 43200
    max_allowed_packet = 2048M
    net_buffer_length = 512M
    debug-info = TRUE
    
  2. Appended the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).

Important note: it was necessary to perform both steps, because if I didn't make the changes to the /etc/my.cnf file as well as append those flags, some of the tables were missing after the import.

System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)

Giacomo1968
Luke Schoen
  • I'm getting the error even after following all the instructions. – Santosh Hegde Jul 29 '17 at 09:05
  • For those who run into this problem on a shared host and cannot change the config file, this solution works very well. – Felipe Costa Dec 11 '17 at 14:15
  • @SantoshHegde It might be too late, but after changing `my.cnf` you need to restart your MySQL service. – Jason Liu Oct 16 '18 at 22:49
  • Just FYI, `max_allowed_packet` can't be more than `1GB` (`1024M`), and on MySQL 5.7+ `net_buffer_length` is autoscaled based on packet settings (don't set it). Lastly, `connect_timeout` shouldn't be related to this error, but other timeout settings might be, specifically `wait_timeout` and `interactive_timeout`... and if none of these tweaks works, then try upgrading your server with more RAM ;) – Jesse Nickles Oct 24 '20 at 21:35
22

Just in case, to check variables you can use

$> mysqladmin variables -u user -p 

This will display the current variables, in this case max_allowed_packet, and as someone said in another answer, you can set it temporarily with:

mysql> SET GLOBAL max_allowed_packet=1072731894;

In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
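
To pick out just this variable from mysqladmin's table output, a simple filter works:

$> mysqladmin variables -u user -p | grep max_allowed_packet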

lesolorzanov
14

You can also log into the database as root (or with the SUPER privilege) and run:

set global max_allowed_packet=64*1024*1024;

This doesn't require a MySQL restart. Note that you should still fix your my.cnf file as outlined in other answers:

[mysqld]
max_allowed_packet=64M

And confirm the change after you've restarted MySQL:

show variables like 'max_allowed_packet';

You can use the command-line as well, but that may require updating the start/stop scripts which may not survive system updates and patches.

As requested, I'm adding my own answer here. Glad to see it works!

razzed
13

The solution is to increase the values of the wait_timeout and connect_timeout parameters in your options file, under the [mysqld] tag.

I had to recover a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):

[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
delayed_insert_timeout=100000


RGA
10

I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.

Add the line:

max_allowed_packet=500M

Then restart the MySQL service once you are done.

Sathish D
6

A couple of things could be happening here:

  • Your INSERT is running long, and the client is disconnecting. When it reconnects, it doesn't select a database, hence the error. One option here is to run your batch file from the command line and select the database in the arguments, like so:

$ mysql db_name < source.sql

  • Another is to run your commands via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query.
Chris Henry
  • Another thing worth mentioning is that I get the error almost immediately after the `source` command – Ben G May 06 '12 at 23:14
  • If you get the error immediately after the source command, then it's likely MySQL doesn't like something about the query. Have you checked the general log? – Chris Henry May 06 '12 at 23:21
  • I have to figure out how to check the general log... I'm on MAMP and I'm not sure it writes it by default. – Ben G May 06 '12 at 23:36
  • I opted to just solve it with PHP querying and slicing it up. – Ben G May 07 '12 at 00:14
5

If you are on Mac and installed mysql through brew like me, the following worked.

  1. cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf

Source: For homebrew mysql installs, where's my.cnf?

  2. Add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf

  3. mysql.server restart

Rahul
4

I had the same problem in XAMPP.

Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as shown below:

max_allowed_packet=500M

Finally, restart the MySQL service and you're done.

Method 2 (the easier way if you are using XAMPP):

Open the XAMPP control panel and click the Config button in the MySQL section.

Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.

Then restart the MySQL service: click Stop on the MySQL service, then click Start again and wait a few minutes.

Then try to run your MySQL query again. Hopefully it will work.

Haron
3

I encountered this error when using MySQL Cluster; I don't know whether this question comes from cluster usage or not, but the error is exactly the same, so I'll give my solution here. I was getting this error because the data nodes had suddenly crashed. When the nodes crash, you can still get a correct result using the command:

ndb_mgm -e 'ALL REPORT MEMORYUSAGE'

And mysqld also appeared to work correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data nodes working; then I realized the problem. So, try restarting all the data nodes; then the MySQL server comes back and everything is OK.
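
To see whether the data nodes are actually up, commands along these lines can help (a hedged sketch; ndb_mgm must be able to reach your management server):

ndb_mgm -e 'SHOW'
ndb_mgm -e 'ALL STATUS'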

One thing was weird to me, though: after I lost the MySQL server for some queries, a command like show tables would still return info like 33 rows in set (5.57 sec), but no table info was displayed.

zhihong
3

Add max_allowed_packet=64M to the [mysqld] section:

[mysqld] 
max_allowed_packet=64M

Restart the MySQL server.

Hermann Schwarz
2

This error message also occurs when you created the SCHEMA with a different COLLATION than the one used in the dump. So, if the dump contains

CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

you should also reflect this in the SCHEMA collation:

CREATE SCHEMA myschema COLLATE utf8_unicode_ci;

I had been using utf8mb4_general_ci in the schema because my script came from a fresh MySQL 8 installation; loading the dump into an old 5.7 server crashed and drove me nearly crazy.
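
To double-check which collation a schema actually has, an information_schema query along these lines should work (the schema name is taken from the example above):

SELECT default_collation_name
FROM information_schema.schemata
WHERE schema_name = 'myschema';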

So maybe this saves you some frustrating hours... :-)

(MacOS 10.3, mysql 5.7)

1

If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.

Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.

It is probably the case that the server admin has made some big operational error, such as assigning buffer sizes greater than the architecture's address-space limits, or more than the virtual memory capacity. The MySQL error log will probably have some relevant information; they will be monitoring this if they are competent anyway.

MarkR
  • In other words, this error might be caused by the server not having enough RAM to complete the import... upgrading should fix it. – Jesse Nickles Oct 24 '20 at 21:30
1

This is a rarer issue, but I have seen it when someone copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is that the database was running and using log files. It can also fail if there are logs in /var/log/mysql. The solution is to copy the /var/log/mysql files as well.

Areeb Soo Yasir
1

For Amazon RDS (my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g. if you have some 50MB blob values in your inserts, set max_allowed_packet to 64M = 67108864), in a new or existing parameter group. Then apply that parameter group to your MySQL instance (this may require rebooting the instance).
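
If you prefer the AWS CLI to the console, the parameter change might look roughly like this (the parameter group name is a placeholder):

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"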

SebaGra
  • If you're working with Amazon RDS, this works. You can't set global values in RDS, so as SebaGra indicated, if you go and modify your DB's custom parameter group, find the `max_allowed_packet` parameter and set it to the appropriate size (or, if you have really big blobs, just set it to the max value 1073741824), it should work. – G_Style Aug 26 '19 at 19:20
  • Were you getting a consistent failure when loading the same dataset, or was it completely random? Asking because I have the same issue but it's completely random; it will fail 5 times and then work on the 5th. Also, moving to my local machine and running from Visual Studio seems to help, but I'll still run into the error every once in a while. – BilliD Aug 27 '19 at 16:36
0

For Drupal 8 users looking for a solution to a DB import failure:

At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table. That is, I guess, some debug log table and is not really important for the site to work, so all of this can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything between them). It's at the very bottom of the SQL file. The issue is described here:

https://www.drupal.org/project/devel/issues/2723437

But there is no solution for it besides truncating that table.

BTW, I tried all the solutions from the answers above and nothing else helped.

MilanG
0

I tried all of the above solutions and all of them failed.

I ended up using -h 127.0.0.1 instead of the default socket, /var/run/mysqld/mysqld.sock.
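
In other words, force a TCP connection instead of the Unix socket; for the original example that would be something like:

mysql -h 127.0.0.1 -u root -p my_db < file.sql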

osexp2000
0

If you have tried all of these solutions, especially increasing max_allowed_packet up to the maximum supported value of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...

The solution: upgrade your server with more RAM, and try again.

Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.

Jesse Nickles
0

Eliminating the errors that triggered warnings was the final solution for me. I also changed max_allowed_packet, which helped with smaller files that contained errors. Eliminating the errors also sped up the process considerably.

  • "Eliminating the errors which triggered Warnings was the final solution for me" - what does that mean? I don't see any warnings in the given question – Nico Haase Dec 06 '21 at 13:41
  • @Basil "Eliminating errors" is always a solution to something. Please be more precise and include details when answering. – Anse Dec 06 '21 at 16:13
  • This does not provide an answer to the question. Once you have sufficient [reputation](https://stackoverflow.com/help/whats-reputation) you will be able to [comment on any post](https://stackoverflow.com/help/privileges/comment); instead, [provide answers that don't require clarification from the asker](https://meta.stackexchange.com/questions/214173/why-do-i-need-50-reputation-to-comment-what-can-i-do-instead). - [From Review](/review/late-answers/30518997) – MD. RAKIB HASAN Dec 07 '21 at 13:19
-1

If none of these answers solves the problem for you, I solved it by removing the tables and creating them again automatically, in this way:

When creating the backup, first back up the structure only, and be sure to add:

  • DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
  • CREATE PROCEDURE / FUNCTION / EVENT
  • IF NOT EXISTS
  • AUTO_INCREMENT

Then just use this backup with your DB, and it will remove and recreate the tables you need.

Then back up just the data, do the same, and it will work.
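
If you build the backups with mysqldump rather than a GUI, roughly equivalent flags would be (a hedged sketch; user and database names are placeholders):

mysqldump -u root -p --no-data --add-drop-table --routines --events my_db > structure.sql
mysqldump -u root -p --no-create-info my_db > data.sql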

Pathik Vejani
El Abogato
-4

How about using the mysql client like this:

mysql -h <hostname> -u username -p <databasename> < file.sql
ErJab