420

I got the "Error Code: 2013. Lost connection to MySQL server during query" error when I tried to add an index to a table using MySQL Workbench. I also noticed that it appears whenever I run a long query.

Is there a way to increase the timeout value?

Nimeshka Srimal
user836026

37 Answers

764

Newer versions of MySQL Workbench have an option to change specific timeouts.

For me it was under Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600

I changed the value to 6000.

I also unchecked Limit Rows, as putting in a limit every time I want to search the whole data set gets tiresome.
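For completeness, the Workbench setting above only changes the client-side read timeout; here is a rough sketch of how to inspect and change the corresponding server-side variables from any SQL session (assuming you have the privilege to set session variables):

```sql
-- List the timeout-related server variables in effect:
SHOW SESSION VARIABLES LIKE '%timeout%';

-- Raise the read timeout for the current session only
-- (value in seconds; pick one suited to your workload):
SET SESSION net_read_timeout = 600;
```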

Marko
eric william nord
  • 2
    Is it possible to increase this limit over 99,999 seconds? The `DBMS connection read time out` field only accepts up to 5 figures, and setting the field to 0 is equivalent to the default parameter (600 seconds). (Windows 7 64-bit Ultimate, MySQL Workbench 5.2.47 CE) – Franck Dernoncourt Jun 01 '13 at 21:30
  • 2
    Following http://stackoverflow.com/q/16877574/395857, this issue is now solved (http://bugs.mysql.com/bug.php?id=69395) – Franck Dernoncourt Jun 18 '13 at 03:46
  • 5
    uncheck limit rows in Edit → Preferences → SQL Queries – Jon Mar 02 '14 at 16:32
  • 1
    I tried this and it now shows Error 0 with no message when I right-click a table and click Select Rows. I guess I don't understand why 600 seconds isn't long enough for read time out when I do the search seconds after logging into the database. – Davicus May 23 '14 at 01:18
  • 14
    After restarting, it is showing Error 2013 again even with read time out set to 6000, so this doesn't appear to be a solution. – Davicus May 23 '14 at 19:09
  • 17
    Remember to restart Workbench AND to close all open query windows first! – pim Jun 25 '17 at 15:35
  • In the latest version, the setting for timeout is located at MySQLWorkBench/Preference/Others – tzatalin Apr 02 '18 at 16:01
  • 1
    This isn't working for me, but I only have the issue when connected over ssh. – avatarofhope2 Apr 16 '19 at 17:37
  • Setting a high value didn't work for me. So what I did was... Instead of arbitrarily setting a high value, I just set it to zero, then it skipped the read timeout. If you go this way, please remember to put a value back again, just in case. – newbie Apr 17 '19 at 12:42
  • For anyone using Querious, this solution helped me find it: Preferences -> Advanced -> Read Timeout. Also, I had to restart the querious app for the changes to start – Julian Jul 16 '19 at 16:21
  • @Julian Couldn't find ' Advanced' in Preferences , I assume you are not using the latest version of MySQLWorkbench? MySQLWorkBench/Preference/Others works for me though. – wawawa Jan 02 '20 at 13:29
  • @Cecilia no I'm using a different sql client called Querious, my comment was for people using that app :) – Julian Jan 02 '20 at 17:23
  • @Julian Ah makes sense then, thanks for letting me know:) – wawawa Jan 03 '20 at 09:36
  • FWIW, `0` works but I had to reset it (already had it as 0, set it to 1200 and then back to 0, and reopen the connection on each change). I guess in a workbench upgrade it lost track of the value it was showing and actually using. – Mauricio Morales Aug 24 '20 at 16:43
  • Mac users press "cmd + ," to see preferences – adarsh srivastava Mar 10 '22 at 13:40
57

If your query has blob data, this issue can be fixed by applying a my.ini change as proposed in this answer:

[mysqld]
max_allowed_packet=16M

By default, this will be 1M (the allowed maximum value is 1024M). If the supplied value is not a multiple of 1024K, it will automatically be rounded to the nearest multiple of 1024K.

While the referenced thread is about the MySQL error 2006, setting the max_allowed_packet from 1M to 16M did fix the 2013 error that showed up for me when running a long query.

For WAMP users: you'll find the flag in the [wampmysqld] section.
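As a sketch of how to confirm the change took effect (and, where you have the required privilege, apply it at runtime without editing my.ini; a runtime change does not survive a restart and only affects new connections):

```sql
-- Check the currently active packet limit (in bytes):
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it to 16M until the next restart:
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;
```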

Community
Harti
  • This was exactly my issue. I was importing a database backup from a file and MySQL Workbench was reporting this 2013 error followed by "Operation failed with exitcode 1". It turns out the backup had large blob columns exceeding MySQL's default max_allowed_packet size of 4M. Increasing this fixed it. (MySQL 5.6 and Workbench 6.2.3). Thanks! – zAlbee Oct 27 '14 at 17:36
  • This was the fix for me too. Although I set it to 256M for a Windows machine. – smoore4 Jun 12 '15 at 11:49
  • Had 16M, got that error with an import file multiple times, changed to 32M and it then worked. – hakre Feb 26 '16 at 09:34
  • It is incredible but, even Xampp 8.1.17 still comes with this variable, in mariaDB settings, set as 1M – aldemarcalazans May 17 '23 at 19:25
42

Start the DB server with the command-line option net_read_timeout / wait_timeout set to a suitable value (in seconds) - for example: --net_read_timeout=100.

For reference see here and here.
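If restarting the server with command-line options is inconvenient, the same variables can be changed at runtime by a sufficiently privileged user; a minimal sketch (the values are examples, not recommendations):

```sql
SET GLOBAL net_read_timeout = 100;  -- seconds; applies to new connections
SET GLOBAL wait_timeout = 28800;    -- idle-connection timeout, in seconds
```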

Yahia
  • 6
    How do I provide this parameter in command line? When I am trying to connect to DB: mysql -u root -p --net_read_timeout=60 or when I am trying to start the service? sudo service mysql start ? At both places it is giving error: unknown variable 'net_read_timeout' – Coder Nov 06 '15 at 19:08
  • @VikasGoel It is a server side parameter. I.e. `mysqld`. – Chloe Jul 05 '19 at 20:38
28
SET @@local.net_read_timeout=360;

Warning: The following will not work when you apply it over a remote connection:

SET @@global.net_read_timeout=360;

Edit: 360 is the number of seconds

Vince V.
user1313024
21

Add the following to the /etc/mysql/my.cnf file:

innodb_buffer_pool_size = 64M

example:

key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
innodb_buffer_pool_size = 64M
Umair Ayub
MysqlMan
16

In my case, setting the connection timeout interval to 6000 or higher didn't work.

I just did what Workbench says I can do:

The maximum amount of time the query can take to return data from the DBMS. Set 0 to skip the read timeout.

On Mac: Preferences -> SQL Editor -> Go to MySQL Session -> set connection read timeout interval to 0.

And it works.

Thet Htun
12

There are three likely causes for this error message:

  1. Usually it indicates network connectivity trouble; check the condition of your network if this error occurs frequently.
  2. Sometimes the "during query" form happens when millions of rows are being sent as part of one or more queries.
  3. More rarely, it can happen when the client is attempting the initial connection to the server.


Cause 2:

SET GLOBAL interactive_timeout=60;

This increases the timeout from its default of 30 seconds to 60 seconds or longer.

Cause 3:

SET GLOBAL connect_timeout=60;
Nanhe Kumar
  • 2
    gives me this error - Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation – MasterJoe Nov 08 '19 at 07:17
10

You should set the 'interactive_timeout' and 'wait_timeout' properties in the mysql config file to the values you need.

Maksym Polshcha
  • This did help me. 'interactive_timeout' in my.cnf was set to 100, which is too short. After I changed it to 3600 s (or any value big enough for you), the problem was resolved. Thx – CobraBJ Jan 06 '15 at 03:51
10

From what I have understood, this error is caused by the read timeout, and the max allowed packet default is 4M. If your query file is more than 4 MB, you get the error. This worked for me:

  1. Change the read timeout. To change it, go to Workbench Edit → Preferences → SQL Editor.
  2. Change max_allowed_packet manually by editing the file my.ini at "C:\ProgramData\MySQL\MySQL Server 8.0\my.ini". The ProgramData folder is hidden, so if you don't see it, enable "show hidden files" in the View settings. Set max_allowed_packet = 16M in the my.ini file.
  3. Restart MySQL. To restart, go to Win+R -> services.msc and restart MySQL.
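After editing my.ini and restarting the service, it is worth confirming that the new value was actually picked up; a quick check from any client session:

```sql
-- Should report 16777216 (i.e. 16M) if the my.ini edit took effect:
SHOW VARIABLES LIKE 'max_allowed_packet';
```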

Avinash
9

If you experience this problem during the restore of a big dump file and can rule out a network problem (e.g. execution on localhost), then my solution could be helpful.

My mysqldump held at least one INSERT that was too big for mysql to compute. You can view this variable by typing show variables like "net_buffer_length"; inside your mysql-cli. You have three possibilities:

  • increase net_buffer_length inside mysql -> this needs a server restart
  • create the dump with --skip-extended-insert, so one line is used per insert -> although these dumps are much nicer to read, this is not suitable for big dumps > 1 GB because it tends to be very slow
  • create the dump with extended inserts (which is the default) but limit the net-buffer_length, e.g. with --net-buffer_length NR_OF_BYTES where NR_OF_BYTES is smaller than the server's net_buffer_length -> I think this is the best solution; although slower, no server restart is needed.

I used the following mysqldump command: mysqldump --skip-comments --set-charset --default-character-set=utf8 --single-transaction --net-buffer_length 4096 DBX > dumpfile

Matt V
7

Just perform a MySQL upgrade, which will rebuild the InnoDB engine along with many tables required for the proper functioning of MySQL, such as performance_schema, information_schema, etc.

Run the command below from your shell:

sudo mysql_upgrade -u root -p
Jamal
Shoaib Khan
  • The error didn't present itself until MySQL Workbench 6.1.4 (and only after a while) and happens on 6.1.6 as well (though only after a few uses) so I'm not sure how rebuilding multiple servers is a fix for a problem that has only presented itself on one GUI just recently. – Davicus May 23 '14 at 19:10
  • This fixed my issue. I had just used Ansible to setup the database over an existing one, and things went haywire. Running this command restored everything to working order. – Jubz Nov 29 '16 at 17:40
7

Sometimes your SQL server gets into deadlocks; I've run into this problem maybe 100 times. You can either restart your computer/laptop to restart the server (the easy way), or go to Task Manager > Services > YOUR-SERVER-NAME (for me it was something like MySQL785), right-click, and choose Restart. Then try executing the query again.

oshin pojta
6

I know it's old, but on Mac:

1. Control-click your connection and choose Connection Properties.
2. Under Advanced tab, set the Socket Timeout (sec) to a larger value.
Aamir Mahmood
  • 1
    Thanks It Worked ! Complete Path : MYSQLWORKBENCH->Database->Manage Connections->Advanced tab, set the Socket Timeout (sec) to a larger value. – nimey sara thomas Jul 29 '20 at 08:01
5

Change the "read time out" time in Edit -> Preferences -> SQL Editor -> MySQL Session.

user6234739
4

Please try unchecking limit rows in Edit → Preferences → SQL Queries.

Also, set the 'interactive_timeout' and 'wait_timeout' properties in the mysql config file to the values you need.

user2586714
2

I got the same issue when loading a .csv file. I converted the file to .sql.

Using the command below, I managed to work around this issue:

mysql -u <user> -p -D <DB name> < file.sql

Hope this helps.

VinRocka
2

If all the other solutions here fail, check your syslog (/var/log/syslog or similar) to see if your server is running out of memory during the query.

I had this issue when innodb_buffer_pool_size was set too close to physical memory without a swapfile configured. MySQL recommends setting innodb_buffer_pool_size to a maximum of around 80% of physical memory on a database-specific server. I had it set to around 90%, and the kernel was killing the mysql process. Moving innodb_buffer_pool_size back down to around 80% fixed the issue.
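As a rough illustration of that 80% guideline (the numbers here are hypothetical, not from the original answer), a my.cnf fragment for a dedicated database server with 16 GB of RAM might look like:

```ini
[mysqld]
# ~80% of 16 GB physical RAM on a dedicated DB server;
# leave headroom for the OS and per-connection buffers.
innodb_buffer_pool_size = 12G
```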

A_funs
2

Go to Workbench Edit → Preferences → SQL Editor → DBMS connection read timeout: set it up to 3000. The error no longer occurred.

Kairat Koibagarov
1

I faced this same issue. I believe it happens when you have foreign keys to larger tables (which takes time).

I tried to run the create table statement again without the foreign key declarations and found it worked.

Then, after creating the table, I added the foreign key constraints using an ALTER TABLE query.

Hope this will help someone.
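The approach above, sketched with hypothetical table and column names (orders/customers are placeholders, not from the original answer):

```sql
-- 1. Create the table without the foreign key:
CREATE TABLE orders (
    id          INT PRIMARY KEY,
    customer_id INT NOT NULL
);

-- 2. Add the constraint afterwards, once the table exists:
ALTER TABLE orders
    ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```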

Nimeshka Srimal
1

This happened to me because my innodb_buffer_pool_size was set to be larger than the RAM available on the server. Things were getting interrupted because of this, and it issued this error. The fix is to update my.cnf with the correct setting for innodb_buffer_pool_size.

1

Go to:

Edit -> Preferences -> SQL Editor

In there you can see three fields in the "MySQL Session" group, where you can now set the new connection intervals (in seconds).

Max
0

It turns out our firewall rule was blocking my connection to MySQL. After the firewall policy was lifted to allow the connection, I was able to import the schema successfully.

wuro
0

I had the same problem, but for me the solution was a DB user with too-strict permissions. I had to allow the Execute ability on the mysql table. After allowing that, I had no dropped connections anymore.

naabster
0

Check if the indexes are in place first.

SELECT *
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = '<schema>';
Gayan Dasanayake
0

I ran into this while running a stored proc which was creating lots of rows in a table in the database. I could see the error come right after the elapsed time crossed the 30-second boundary.

I tried all the suggestions in the other answers. I am sure some of it helped; however, what really made it work for me was switching from Workbench to Sequel Pro.

I am guessing it was some client-side connection issue that I could not spot in Workbench. Maybe this will help someone else as well?

RN.
0

If you are using SQL Workbench, you can try adding an index to your tables. To add an index, click on the wrench (spanner) symbol on the table; it should open the setup for the table. Below, click on the index view, type an index name, and set the type to INDEX. In the index columns, select the primary column in your table.

Do the same for the other primary keys on the other tables.

Matthew E
0

There seems to be an answer missing here for those using SSH to connect to their MySQL database. You need to check two places, not one as suggested by other answers:

Workbench Edit → Preferences → SQL Editor → DBMS

Workbench Edit → Preferences → SSH → Timeouts

My default SSH timeouts were set very low and were causing some (but apparently not all) of my timeout issues. Afterwards, don't forget to restart MySQL Workbench!

Last, it may be worth contacting your DB admin and asking them to increase the wait_timeout and interactive_timeout properties in MySQL itself via my.cnf plus a MySQL restart, or doing a global set if restarting MySQL is not an option.

Hope this helps!

NekoKikoushi
0

Three things to check:

  1. Do multiple queries show a lost connection?
  2. How do you use SET queries in MySQL?
  3. Do you run DELETE and UPDATE queries simultaneously?

Answers:

  1. Always try to remove the DEFINER, as MySQL creates its own definer, and if multiple tables are involved in an update, try to make it a single query, as multiple queries can sometimes show a lost connection.
  2. Always SET the value at the top, but after the DELETE if its condition doesn't involve the SET value.
  3. Use DELETE first, then UPDATE, if both operations are performed on different tables.
RalfFriedl
0

I had this error message due to a problem after upgrading MySQL. The error appeared immediately after I tried to run any query.

Check the MySQL error log files in /var/log/mysql (Linux).

In my case, reassigning the mysql owner to the MySQL system folder worked for me:

chown -R mysql:mysql /var/lib/mysql
0

Establish the connection first: mysql --host=host.com --port=3306 -u username -p. Then select your DB: use dbname. Then source the dump: source C:\dumpfile.sql. After it's done: \q.

Swaleh Matongwa
0

My observation -

when you run MySQL Workbench and terminal together and in terminal you do -

SET AUTOCOMMIT = 0;

OR

START TRANSACTION;

Then you usually face this kind of problem.

And even after -

SET AUTOCOMMIT = 1;

OR

COMMIT;

The problem persists.

You need to log out from both the terminal and MySQL Workbench and then log in again, or else do a reboot.

Payel Senapati
0

Using MySQL Server 8.0.28 Community Edition with 100+ tables created in my schemas, MySQL Server did not work correctly and crashed many times. I found the issues and solutions.

I wrote "select count(*) from table1 .. table100" queries and ran them in a .sql file.

Issue 1: MySQL Server was set up for utf8mb4 and my tables were created with utf8.

Solution 1: You must set your tables and columns to utf8mb4, or recreate them with it.

Issue 2: The back_log parameter needs reconfiguring.

Solution 2: Set back_log = 50 + (connectioncount / 5).

Issue 3: The wait_timeout parameter needs reconfiguring.

Solution 3: You must set it to 180+ (you can try higher values).

These actions solved my problems.

gelistirici
0

In my case nothing above worked. Then I tried stopping the mysql service and starting it again, and magically the query started running:

$ sudo service mysql stop
$ sudo service mysql start

Hope this helps

luffy
0

Stop any running application that is using the same database. That way, Workbench will be able to make the changes fine.

-1

Check for OOM kills in /var/log/messages, and modify the innodb_buffer_pool_size value; when loading data, use at most 50% of OS memory.

Hope this helps.

Sanoop Surendran
-1

This usually means that you have "incompatibilities with the current version of MySQL Server"; see mysql_upgrade. I ran into this same issue and simply had to run:

mysql_upgrade --password

The documentation states that "mysql_upgrade should be executed each time you upgrade MySQL".

-2

Try to apply LIMIT.

Case with me

Initially, the query was

SELECT * FROM TABLE1;

This was giving Error Code: 2013. Lost connection to MySQL server during query

Solution

After applying LIMIT

SELECT * FROM TABLE1 LIMIT 10;

it worked. And when I again tried the query without LIMIT, that also worked.

Note: If this worked for you and you figure out why this worked, do comment.

Namaste

Deepam Gupta
  • I am also getting the error ERROR 2013 (HY000) at line 752: Lost connection to MySQL server during query. Then I read the answer by @deepam Gupta where he applied the limit, then I searched in the query near line 752, where I found some garbage data (limited to my case), and I deleted that data. Adding the read timeout didn't work in my case. – Pradeep May 31 '22 at 10:08
  • Comments are not meant for "thank you" or "this works" messages. This is what upvotes do, see https://meta.stackexchange.com/questions/19756/how-do-comments-work I would suggest you remove the last two lines from your answer – FlyingTeller Jun 02 '23 at 08:49