
I have a Laravel application which must insert/update thousands of records per second in a for loop. My problem is that my database insert/update rate is only 100-150 writes per second. I have increased the amount of RAM dedicated to my database, but with no luck.


Is there any way to increase the write rate for MySQL to thousands of records per second?

Please provide me with optimum configurations for performance tuning.

And PLEASE do not downvote the question. My code is correct. It's not a code problem, because I have no problem with MongoDB, but I have to use MySQL.

My storage engine is InnoDB.

Ramin Omrani
    https://dev.mysql.com/doc/refman/5.7/en/insert-optimization.html https://stackoverflow.com/questions/9819271/why-is-mysql-innodb-insert-so-slow https://serverfault.com/questions/118504/how-to-improve-mysql-insert-and-update-performance – ceejayoz Jun 14 '17 at 19:02
  • @ceejayoz Thanks. I will check these out. – Ramin Omrani Jun 14 '17 at 19:04
    Batch processing is your friend. Maybe do 10 to 100 inserts in a single statement? `ON DUPLICATE KEY UPDATE` could work for updates unless better is needed. NOTE: fewer transactions is also generally good if you're using transactions. – ebyrob Jun 14 '17 at 19:05
  • What makes you think it's slow? Also, you are comparing Mongo with MySQL, which is apples to oranges. Show some sample data for the inserts, or better yet that loop you mentioned. – Mihai Jun 14 '17 at 19:08
  • @Mihai It does 100s of operations and Ramin wants (or needs) 1000s of operations per second; therefore, by definition, it's an order of magnitude "too slow". – ebyrob Jun 14 '17 at 19:10
  • @Mihai Because it's not even 1000 transactions/s on an 8 GB RAM Core i5 CPU. It should not be like this, in my opinion. – Ramin Omrani Jun 14 '17 at 19:10
  • I have read in a question on Stack Overflow that MySQL is capable of thousands of transactions/s. – Ramin Omrani Jun 14 '17 at 19:11
  • @ebyrob I have tried transactions for my inserts & updates, but with no different results. It has to be some misconfiguration. – Ramin Omrani Jun 14 '17 at 19:12
  • @Ramin If you can isolate to mostly `insert` transactions and just modify the insert code to do 2 inserts every execute call, that might give you an idea of whether batching helps. Of course, you're right that if it's a tuning problem, it'll be a lot more difficult for us to help from a generic standpoint. Also, you're 100% sure autocommit is off? – ebyrob Jun 14 '17 at 19:16
  • @ebyrob thanks for the help – Ramin Omrani Jun 14 '17 at 19:19
  • Dup of https://dba.stackexchange.com/questions/176301/mysql-insert-update-is-so-slow – Rick James Jun 29 '17 at 03:17

2 Answers


Inserting rows one at a time, and autocommitting each statement, has two overheads.

Each transaction has overhead, probably more than the cost of a single insert. So the trick is to insert multiple rows in one transaction. This requires a code change, not a configuration change.
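
In Laravel terms, a minimal sketch of that code change might look like the following (the `$rows` variable and the table name `t` are placeholder assumptions, not taken from the question):

```php
use Illuminate\Support\Facades\DB;

// Same single-row INSERT statements, but the commit cost is paid once per
// batch of 1000 rows instead of once per row.
foreach (array_chunk($rows, 1000) as $batch) {
    DB::transaction(function () use ($batch) {
        foreach ($batch as $row) {
            DB::table('t')->insert($row); // one row per statement, one COMMIT per batch
        }
    });
}
```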

Each INSERT statement has overhead; a single-row insert is roughly 90% overhead and 10% actual insert work.

The optimum is 100-1000 rows inserted per transaction.

For rapid inserts:

  • Best is LOAD DATA -- if you are starting with a .csv file. If you must build the .csv file first, then it is debatable whether that overhead makes this approach lose.
  • Second best is multi-row INSERT statements: `INSERT INTO t (a,b) VALUES (1,2), (2,3), (44,55), ...`. I recommend 1000 rows per statement, and a COMMIT after each statement. This is likely to get you past 1000 rows per second being inserted.
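
From a Laravel application, a sketch of the multi-row form could look like this (again, `$rows` and the table/column names `t`, `a`, `b` are placeholder assumptions):

```php
use Illuminate\Support\Facades\DB;

// Each ->insert() call below is sent as a single
// INSERT INTO t (a, b) VALUES (...), (...), ... statement containing up to
// 1000 rows; with autocommit on, each statement commits as one unit.
foreach (array_chunk($rows, 1000) as $chunk) {
    DB::table('t')->insert($chunk);
}
```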

Another problem... Since each index is updated as the row is inserted, you may run into trouble with thrashing I/O to achieve this task. InnoDB automatically "delays" updates to non-unique secondary indexes (no need for INSERT DELAYED), but the work is eventually done. (So RAM size and innodb_buffer_pool_size come into play.)
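
If you want to see where that setting currently stands from the application side, something like this works (the sizing hint in the comments is a common rule of thumb, not a measured recommendation for this particular server):

```php
use Illuminate\Support\Facades\DB;

// SHOW VARIABLES is plain MySQL; nothing Laravel-specific here.
$bufferPool = DB::selectOne("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'");

// If the value is far below ~70% of RAM on a dedicated 8 GB database server,
// raise innodb_buffer_pool_size in my.cnf and restart mysqld, or use
// SET GLOBAL innodb_buffer_pool_size = ... on MySQL 5.7.5+, where it is dynamic.
```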

If the "thousands" of rows/second is a one time task, then you can stop reading here. If you expect to do this continually 'forever', there are other issues to contend with. See High speed ingestion .

Rick James

For inserts, you might want to look into the INSERT DELAYED syntax. That will increase insert performance, but it won't help with updates, and the syntax will eventually be deprecated. This post offers an alternative for updates, but it involves custom replication.

One way my company has succeeded in speeding up inserts is by writing the SQL to a file and then using a MySQL LOAD DATA INFILE command, but I believe we found that this required the mysql client application to be installed on the server's command line.
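
A rough sketch of that approach from Laravel follows; the file path, table name and columns are made up for illustration, and `LOAD DATA LOCAL INFILE` needs `local_infile` enabled on the server plus `PDO::MYSQL_ATTR_LOCAL_INFILE` on the client connection:

```php
use Illuminate\Support\Facades\DB;

// 1. Dump the rows to a CSV file.
$path = storage_path('app/bulk_rows.csv');
$fh = fopen($path, 'w');
foreach ($rows as $row) {
    fputcsv($fh, [$row['a'], $row['b']]);
}
fclose($fh);

// 2. Bulk-load the file in one statement.
$pdo = DB::connection()->getPdo();
$pdo->exec(
    "LOAD DATA LOCAL INFILE " . $pdo->quote($path) . "
     INTO TABLE t
     FIELDS TERMINATED BY ','
     LINES TERMINATED BY '\\n'
     (a, b)"
);
```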

I've also found that inserting and updating in batches is often faster. So if you're calling INSERT 2k times, you might be better off running 10 inserts of 200 rows each. This would decrease the lock requirements and reduce the amount of information and the number of calls sent over the wire.
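
For the update side, one way to batch is a multi-row `INSERT ... ON DUPLICATE KEY UPDATE`, as suggested in the comments above; this sketch uses placeholder table/column names (`t`, `id`, `counter`):

```php
use Illuminate\Support\Facades\DB;

// 200 rows per statement: each chunk becomes one
// INSERT ... VALUES (?,?),(?,?),... ON DUPLICATE KEY UPDATE statement.
foreach (array_chunk($rows, 200) as $chunk) {
    $placeholders = implode(', ', array_fill(0, count($chunk), '(?, ?)'));

    $bindings = [];
    foreach ($chunk as $row) {
        $bindings[] = $row['id'];
        $bindings[] = $row['counter'];
    }

    DB::statement(
        "INSERT INTO t (id, counter) VALUES {$placeholders}
         ON DUPLICATE KEY UPDATE counter = VALUES(counter)",
        $bindings
    );
}
```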

cwallenpoole
  • Thanks buddy. After a while, I resolved this by using transactions and sending all the data in one go to the database server. – Ramin Omrani Jun 15 '17 at 07:48
  • I will write it here later – Ramin Omrani Jun 15 '17 at 07:48
  • `INSERT DELAYED` applies to MyISAM and not InnoDB. `LOAD DATA` is an SQL statement, so if you can issue "raw" SQL, it should not need the `mysql` client. However, the overhead of writing the file may wipe out most of `LOAD's` benefit. – Rick James Dec 15 '21 at 05:03