
I have a script below that goes through 380 MySQL InnoDB databases and runs various CREATE TABLE, INSERT, UPDATE, etc. statements to migrate the schema. It runs from a web server that connects to a cloud database server. I am leaving the migration SQL out of this question as I don't think it is relevant.

I ran into an issue and I am trying to find a workaround.

I have a 4 GB RAM cloud database server running MySQL 5.6. I migrated 380 databases, each from 40 tables to 59 tables. About 70% of the way through I got the errors below. It died in the middle of one migration and the server went down. I was watching memory usage, and the server ran out of memory. It is a database as a service, so I don't have root access to the server and I don't know all the details.

Running queries on phppoint_smg


Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000) at line 355: Lost connection to MySQL server during query

Running queries on phppoint_soulofhalloween


Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

Running queries on phppoint_srvais


Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

Here is a simplified version of the PHP script.

$db_host = escapeshellarg($db_host);
$db_user = escapeshellarg($db_user);
$db_password = escapeshellarg($db_password);
foreach($databases as $database)
{
    echo "Running queries on $database\n***********************************\n";
    $db = escapeshellarg($database); // escape the database name as well
    system("mysql --host=$db_host --user=$db_user --password=$db_password --port=3306 $db < ../update.sql");
    echo "\n\n";
}

My questions:

Is there any way to avoid memory usage going up as I do the migration? I am doing it one database at a time. Or is the addition of tables and data the reason it goes up?

I was able to use the server afterwards, removed 80 databases, and finished the migration. It now has 800 MB free, and I expect it to go down to 600 MB. Before the migration it was at 500 MB.

Chris Muench
  • As there seems to be no time constraint: Am I correct in assuming this is not a production server? – DaSourcerer May 08 '16 at 19:37
  • I am not sure what you mean by time constraint? I was able to fix the issue after the error by finishing the script with more memory available. But I want to solve this for the future. – Chris Muench May 08 '16 at 20:22
  • I meant that in the sense of "the script does not have to be finished in a set amount of time." It is also not entirely clear to me whether a service outage is OK or not. It would also help to know what type of migrations you are running here. If there is only a little data manipulation necessary, this may be possible without PHP at all. By the way: are you running your script from the shell? – DaSourcerer May 08 '16 at 20:38
  • I don't care how long it takes to run, as I have my system set up in a way that I can run 2 versions of code. So if the script takes longer to process each database one by one, that's fine. I can't have much downtime, which is why I have 2 versions of code (before schema migration + after). The migration: https://gist.github.com/blasto333/b6ae212929674508bfeec338b992de52 – Chris Muench May 08 '16 at 21:17
  • What Engine are you using? How much RAM do you have? `SHOW VARIABLES LIKE "%buffer%";` Possibly you have configured MySQL in a bad way. – Rick James May 10 '16 at 02:45
  • InnoDB; here is the config. I am thinking of cutting the InnoDB buffer pool in half. https://gist.github.com/blasto333/aa4241a4e37447961188356719ea6984 – Chris Muench May 10 '16 at 03:43
  • 4 GB of RAM is what I have – Chris Muench May 10 '16 at 04:15
  • As I am preparing my own answer: What role precisely does the webserver have? And how much access to the database server do you really have beyond the ability to change the setting of MySQL? Do you have shell (ssh) access and can you run your own scripts? – DaSourcerer May 12 '16 at 19:42
  • I actually have a feeling this is not related to memory issues [at all](http://stackoverflow.com/a/22477717/3012385). – DaSourcerer May 14 '16 at 20:50

7 Answers

2

It's pretty obvious that your migration SQL queries kill the server. It seems the database server simply has too little free RAM for such operations. Depending on the database file size and the queries, this can certainly drive up RAM usage. Without knowing the exact server specs, the data in the databases, and the queries you fire, there is no exact answer that can help you here.

Brain Foo Long
  • Ok thanks. I have about 17 GB worth of database data. I think I might have to upgrade soon. Do you think it would help at all if I did the migrations cold or did them in staggered groups? This was an unusually large migration. – Chris Muench May 09 '16 at 03:25
  • One thing I don't understand is that I run them one by one, but the memory goes down linearly. I would think that after migrating one database it would gain some memory back. – Chris Muench May 09 '16 at 13:27
  • I don't know the exact mechanics of the DBs either, but it sounds reasonable that memory goes down as more statements run. And if it is (mis)configured to hold (too) much information in RAM, then you see such characteristics. It all depends on so many things; I can't get deeper here because of the lack of information, sorry. Only your server admin can really help here. – Brain Foo Long May 09 '16 at 18:25
  • Ok; we are going to try to change some of the buffers and cache settings to use less RAM – Chris Muench May 09 '16 at 18:54
  • "It's pretty obvious..." NO! It's unlikely to be the SQL queries that are killing the server. They are merely prompting the DBMS to cache in on the memory it is promised by its config - memory which the OS cannot supply (or the server has some faulty RAM). – symcbean May 14 '16 at 18:01
  • @symcbean I never said it's common behaviour. My answer depends specifically on this case, because it's the only thing here that does, obviously, kill the server. Sorry if the answer confuses a bit. – Brain Foo Long May 14 '16 at 18:47
2

Instead of spawning lots of processes, generate one file, then run it. Generate the file with something like:

$out = fopen('tmp_script.sql', 'w');
foreach($databases as $database)
{
    fwrite($out, "USE $database;\n");
    fwrite($out, "source ../update.sql\n"); // no trailing ';' - the client may treat it as part of the filename
}
fclose($out);

Then, either manually or programmatically, do:

mysql ... < tmp_script.sql

It might be safer to do it manually so that PHP is out of the way.
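If you do run it programmatically, a minimal sketch (assuming the same $db_host, $db_user and $db_password variables from the question, already passed through escapeshellarg) could check the mysql client's exit status so a failure stops the run instead of silently continuing:

// Run the whole generated script in a single mysql process.
// system() puts the command's exit status into $exit_code (0 = success).
system("mysql --host=$db_host --user=$db_user --password=$db_password --port=3306 < tmp_script.sql", $exit_code);
if ($exit_code !== 0) {
    die("mysql exited with code $exit_code\n");
}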

Rick James
  • I am not sure PHP has anything to do with it, as the web server is fine; it is the database server that has the issue. Unless you are saying that the mysql processes have some sort of effect? – Chris Muench May 10 '16 at 02:09
  • I can't tell what is causing the problem. What I suggested will take PHP out of the picture and speed up the whole process. – Rick James May 10 '16 at 02:44
2

Your PHP sample doesn't use much memory, and it's not running on the database server, which is the one that went down, right? So the problem is in your configured MySQL parameters.

Based on your Gist, and using a simple MySQL memory calculator, we can see that your MySQL service can use up to 3817 MB of memory. If the server only had 4 GB when the error happened, it's quite probable that this was the cause (you need some additional memory for the OS and running applications). Increasing the memory or fine-tuning the server would resolve it. Take a look at the MySQL documentation page on server variables to best understand each value.
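If you want to sanity-check such a calculator yourself, the usual worst-case estimate is global buffers plus max_connections times the per-connection buffers; this is an approximation, not an exact accounting, but you can pull the inputs directly:

-- Worst-case memory estimate (roughly what the calculators use):
--   innodb_buffer_pool_size + key_buffer_size + query_cache_size
--   + max_connections * (read_buffer_size + read_rnd_buffer_size
--     + sort_buffer_size + join_buffer_size + thread_stack
--     + binlog_cache_size + tmp_table_size)
SHOW VARIABLES WHERE Variable_name IN
('innodb_buffer_pool_size', 'key_buffer_size', 'query_cache_size',
 'max_connections', 'read_buffer_size', 'read_rnd_buffer_size',
 'sort_buffer_size', 'join_buffer_size', 'thread_stack',
 'binlog_cache_size', 'tmp_table_size');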

However, this may not be the only cause for disconnects/timeouts (but it does seem to be your case, since increasing memory resolved the problem). Another common problem is to underestimate the max_allowed_packet value (16MB in your configuration), because such scripts can easily have queries beyond this value (for example, if you have several values for a single INSERT INTO table ...).

Consider that max_allowed_packet should be bigger than the biggest command you issue to your database (it's not the whole SQL file, but each command within it: the block between semicolons).
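You can check the current value and raise it at runtime; note that SET GLOBAL requires the SUPER privilege and only affects connections opened afterwards, so persist the change in my.cnf as well:

SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- 64 MB, new connections only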

But please do consider more careful tuning, since a badly configured server may suddenly crash or become unresponsive, while it could run perfectly well without adding more memory. I suggest running performance-tuning scripts like MySQLTuner-perl, which will analyze your data, index usage, and slow queries, and even propose the adjustments you need to optimize your server.
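A typical invocation against a remote server looks something like this (host and credentials are placeholders; per the tool's README, --forcemem tells the script how much RAM the server has, since it cannot detect that remotely):

perl mysqltuner.pl --host db.example.com --user admin --pass 'secret' --forcemem 4096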

Capilé
  • I got mysqltuner to say maximum usage is 3400 MB. Is that good? I have updated the gist to have the latest settings. Also, here are the mysqltuner results: https://gist.github.com/blasto333/669717ceec7e2d9f314967f7fa525b0d – Chris Muench May 16 '16 at 19:56
  • Also, here are the latest my.cnf settings: https://gist.github.com/blasto333/aa4241a4e37447961188356719ea6984 – Chris Muench May 16 '16 at 21:15
  • Read the mysqltuner.pl General recommendations; it's still saying it's too much memory: `Total buffers: 2.3G global + 2.7M per thread (410 max threads)`. I advise lowering `max_connections` to a safe value, and also optimizing the other values suggested. Note that mysqltuner uses your data, cache, and statistics to propose adjustments, so you may want to review them again after it's been in production for a while. – Capilé May 16 '16 at 21:36
  • I think 3.4 GB in a 4 GB system is a bit risky... You may also take into consideration the maximum number of clients that can connect to the server: for example, if you have a limit of 100 PHP web server processes, having 410 max connections available at the database is not recommended. – Capilé May 16 '16 at 21:39
1

One thing you should try, to relieve your RAM (as your server is obviously extremely low on it), is to force garbage collection after unsetting big variables in each loop iteration.

I was facing a similar problem with pthreads under PHP 7 (and 512 GB of RAM) on a massive server that was handling 1024 async connections to MariaDB and PostgreSQL.

Try this in each loop iteration (it assumes the loop is written as foreach($databases as $key => $database)):

// First, release the main array's entry immediately at loop start:
unset($databases[$key]);

// Second, once this database is processed, release it and purge immediately:
unset($database);
gc_collect_cycles();

Also, set up a control to constantly monitor RAM usage under load, to see if this happens on a particular $database. If RAM gets too low, have the control chunk your $database work and do multi-row insert batches, unsetting them as they are done. This purges more RAM and avoids overly large array copies before the inner insert loop, especially if you are using classes with constructors. With 4 GB, I would set batches of 400 to 500 inserts at most, depending on your overall insert length.
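As a minimal sketch of that batching idea (the $mysqli connection, the $rows array, and table t with column a are all illustrative, not taken from the OP's script):

// Insert in batches of 500 rows, releasing each batch as it completes.
foreach (array_chunk($rows, 500) as $batch) {
    $values = [];
    foreach ($batch as $row) {
        $values[] = "('" . $mysqli->real_escape_string($row['a']) . "')";
    }
    $mysqli->query("INSERT INTO t (a) VALUES " . implode(',', $values));
    unset($batch, $values);  // drop references to the finished batch
    gc_collect_cycles();     // purge lingering cycles
    // Crude watchdog: bail out if usage climbs past ~512 MB.
    if (memory_get_usage(true) > 512 * 1024 * 1024) {
        die("Memory threshold exceeded\n");
    }
}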

cpugourou
  • This is a valuable suggestion when the memory exhaustion is on the PHP side, and/or because the MySQL server runs on the same host. From the OP's question it seems to me that this is not the case, though. – LSerni May 11 '16 at 05:48
1

If your database server is crashing (or being killed by the OOM killer), then the reason is that it has been configured to use more memory than is available on the device.

You forgot to tell us what OS is running on the database nodes.

If you don't have root access to the server, then this is the fault of whoever configured it. Memory overcommit should be disabled (requires root access). A tool like mysqltuner will show you how much memory the DBMS is configured to use (requires admin privilege). See also this Percona post.
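For anyone who does have root on a similar box, checking and disabling overcommit looks roughly like this (mode values per the kernel's vm documentation; 2 means strict accounting, i.e. no overcommit):

sysctl vm.overcommit_memory               # 0 = heuristic (default), 2 = strict
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=80     # commit limit = swap + 80% of RAM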

symcbean
  • It is a database as a service through a well-known provider, and I am about 99.9999% sure it is some variant of Linux (CentOS / Red Hat). I will ask them about memory overcommit and see what they say about the tool. – Chris Muench May 14 '16 at 19:11
  • I included the details about possible causes for the benefit of anyone having a similar problem who does have admin access to the server. If you are paying for database-as-service, then I am skeptical of your assertion that the database is crashing. Are you sure the database is crashing and it's not just your session which is failing? Regardless, part of the service you are paying for includes support for issues like this. It may simply be that the server is configured with a very low timeout / mysql packet size. – symcbean May 14 '16 at 20:44
1

I think they are right about RAM, but it is worth noting that the tools you use are important.

Have you tried http://www.mysqldumper.net/ ? If you use it (a PHP script), check the settings for the PHP memory limit and let it auto-detect.

I used to use http://www.ozerov.de/bigdump/ but it's so slow that I don't anymore.

MySQLDumper, on the other hand, is fast at both backups and restores, and doesn't crash (if you set the memory limit).

I have found this tool to be exceptional.

iGNEOS
-1

Updated:

Your comments completely change the situation. Here is my updated answer:

Since you have no access to the MySQL server, you need an alternative approach.

Mandatory: remove all special "things" from the import file, such as enclosing transactions, INSERT DELAYED / INSERT IGNORE, and so on.

Mandatory: use single-statement SQL. I do not know what your inserts look like, but make each one a single statement with a single row; do not bundle many rows into one statement,

e.g.

instead of

insert into x(...fields...)values
             (...single row...),
             (...single row...),
             (...single row...),
             (...single row...)
;

do

insert into x(...fields...)values(...single row...); 
insert into x(...fields...)values(...single row...); 
insert into x(...fields...)values(...single row...); 
insert into x(...fields...)values(...single row...); 

Then try these:

  • You might try to "upload" a my.ini with big buffers and so on. The provider of the MySQL service may then give you more RAM. It is a service, after all :)

  • You might try to generate one file with the schema and separate files with the data. Then import the schema, then begin to import table by table, see where it crashes, and resume from the crashed file.

  • You might import everything with MyISAM tables. Then you can convert them to InnoDB: alter table x engine=innodb. However, doing so will lose all referential integrity and you will need to enforce it later.

  • You might import everything with MyISAM tables. Then, instead of converting them, you can do

something like this for each table:

alter table x rename to x_myisam;
create table x(...);
insert into x select * from x_myisam;

I believe there is a single table that breaks the process. If you find it, you can proceed with it manually, for example importing 10000 rows at a time.
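A sketch of that chunked copy, assuming the table has an auto-increment integer primary key named id (adjust to your schema):

insert into x select * from x_myisam where id between 1 and 10000;
insert into x select * from x_myisam where id between 10001 and 20000;
-- ...continue in 10000-row ranges up to max(id)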

Alternative approach

If your server is in Amazon AWS or a similar service, you can try to "scale up" ("enlarge") the server for the import, and "scale down" ("shrink") it after the import is done.

Old answer

Why do you use a PHP script? Try creating, or generating via PHP, a shell script; then run the shell script.

It is also very important to create a huge swap file on the system. Here is one way to do it (it might not work on older systems):

sudo su                    # become root
cd /somewhere
fallocate -l 16G file001   # preallocate a 16 GB file
chmod 600 file001          # a swap file should not be readable by others
mkswap file001
swapon file001

Then execute the PHP or shell script.

Once it is done, you can swapoff and remove the file, or make it permanent in fstab.
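The fstab entry for making it permanent would look something like this (path taken from the example above):

/somewhere/file001  none  swap  sw  0  0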

Let me know if I need to clarify anything.

Nick
  • There is not a memory usage problem on the web server, just on the database server. It is a database as a service with no swap space, and I can only adjust my.cnf settings. I need a PHP script as there is a lot of logic to determine which databases to upgrade. – Chris Muench May 08 '16 at 17:13
  • Oh, I read that you have no access to the DB server. Well then, you cannot do much. You will probably be able to import as MyISAM and change to InnoDB later using alter table xxx engine=innodb. This will break any referential integrity you might have, but it is the only thing you can do. Is this AWS? Can you buy a bigger instance for the data load only? – Nick May 09 '16 at 06:12
  • I will scale up just for the import, or find another way to reduce memory usage. – Chris Muench May 09 '16 at 13:25
  • I am curious if it will be OK - let me know. – Nick May 09 '16 at 15:13