3

For my development machine I need no data consistency in case of a crash. Is there a configuration for a Debian-like system that optimizes MySQL for speed (even if it sacrifices reliability)?

So something like: Cache the last 1 GB in RAM. Don't touch the disk with data until the 1 GB is used.

Ole Tange
  • You really need to tune the config for your app, for your server - there isn't a universal solution – Clive Dec 09 '16 at 03:43
  • It is not going on the server; it is going on the development machine. It is impossible to tune the config for the app, as the program is not done: it is being developed, which is the whole point of the development machine. Keeping the last 1 GB of transactions in RAM without flushing them to disk will in my case be sufficient. – Ole Tange Dec 09 '16 at 03:49
  • I am unsure why this question is downvoted. This is an interesting use case where MySQL speed is prioritized over reliability. The MyISAM engine or the MEMORY engine could serve well. tmpfs might also be helpful. – zedfoxus Dec 09 '16 at 04:02
  • @zedfoxus, I did not downvote but I imagine someone thought that it's not exactly a question about coding. Some people have a very narrow view of [the topics appropriate for Stack Overflow](http://stackoverflow.com/help/on-topic). My opinion is that tuning MySQL for a developer's environment clearly fits "software tools commonly used by programmers," but other people may disagree. – Bill Karwin Dec 09 '16 at 06:07
  • Hi @BillKarwin, that makes sense. Thank you. – zedfoxus Dec 09 '16 at 14:31
  • Take a look at this YouTube video, which explains the options in more detail: https://www.youtube.com/watch?v=0CqMv0ucqFA –  Jul 14 '17 at 23:18
  • What version of MySQL do you use? –  Jul 14 '17 at 23:18
  • @JohnR I'd prefer a solution that is not very dependent on the version. You can assume it is > 5.5 (Debian-stable level). – Ole Tange Jul 15 '17 at 15:29

4 Answers

2

You can disable many of InnoDB's durability features, at the risk of losing data. But sometimes you want to run the database in "running with scissors" mode, because the original data is safely stored somewhere else and the copy in your test database is easily recreated.

This blog describes Reducing MySQL durability for testing. You aren't going to see any official MySQL recommendation to do this for any purpose other than testing!

Here's a summary of changes you can make in your /etc/my.cnf:

[mysqld]
# log_bin (comment this out to disable the binary log)
# sync_binlog=0 (irrelevant if you don't use the binary log)
sync_frm=0                          # don't sync .frm files to disk on table creation
innodb_flush_log_at_trx_commit=0    # flush the redo log roughly once per second, not at every commit
innodb_doublewrite=0                # disable the doublewrite buffer (no protection against partial page writes)
innodb_checksums=0                  # skip page checksum verification
innodb_support_xa=0                 # disable two-phase commit (XA) support
innodb_log_file_size=2048M          # or more; a larger redo log means fewer checkpoint flushes

He also recommends increasing innodb_buffer_pool_size, but the right size depends on your available RAM.
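As a rough sketch (the 4G value below is only an example for a machine with RAM to spare, not a recommendation), it goes in the same [mysqld] section:

innodb_buffer_pool_size=4G          # example only; size it so your working set fits in memory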

For what it's worth, I recently tried setting innodb_flush_log_at_trx_commit=0 in the configuration of the default Vagrant box I built for developers on my team, but I had to back out that change because it was causing too much lost time for developers who ended up with corrupted databases. Just food for thought. Sometimes it's not a good tradeoff.

This doesn't do exactly what you asked (keep the last 1 GB of data in RAM): InnoDB still does transaction logging, and the log is flushed to disk about once per second. There's no way to turn that off in MySQL.

You could try using MyISAM, which uses buffered writes for data and index, and relies on the filesystem buffer. Therefore it could cache some of your data (in practice I have found that the buffer flushes to disk pretty promptly, so you're unlikely to have a full 1GB in RAM at any time). MyISAM has other problems, like lack of support for transactions. Developing with MyISAM and then using InnoDB in production can set you up for some awkward surprises.
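If you do want to try it despite those caveats, a minimal sketch might look like this (the table and columns are made up for illustration):

CREATE TABLE demo_events (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload VARCHAR(255)
) ENGINE=MyISAM;

-- or make MyISAM the default for new tables in /etc/my.cnf:
-- [mysqld]
-- default-storage-engine=MyISAM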

Here are a couple of other changes you could make in your MySQL sessions for the sake of performance, but I don't recommend them even for development, because they can change your application's behavior.

set session unique_checks=0;
set session foreign_key_checks=0;

Some people recommend using the MEMORY storage engine. That has its own problems, like size limits, table-locking, and lack of support for transactions.

I've also experimented with trying to put tables or tmpdir onto a tmpfs, but I found that didn't give nearly the performance boost you might expect. There's overhead in an RDBMS that is not directly related to disk I/O.
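If you still want to experiment with that, a hedged sketch (the mount point and size are arbitrary examples; make sure the mysql user can write there):

mount -t tmpfs -o size=2G,mode=1777 tmpfs /mnt/mysql-tmpfs

# then point MySQL's temporary directory at it in /etc/my.cnf:
[mysqld]
tmpdir=/mnt/mysql-tmpfs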

You might also like to experiment with MyRocks, a version of MySQL that includes the RocksDB storage engine. Facebook developed it and released it as open source. See Facebook rocks an open source storage engine for MySQL (InfoWorld). They promise it reduces I/O, compresses data, and does other neat things.

But again, it's a good rule of thumb to make your development environment as close as possible to your production environment. Using a different storage engine creates a risk of not discovering some bugs until your code reaches production.

Bottom line: tuning MySQL isn't a magic bullet. Maybe you should consider designing your application to make more use of microservices, caches, and message queues, and to rely less on direct SQL queries.

Also, I'd recommend always supplying your developers with the fastest SSD-based workstations you can afford. Go for the top of the line on CPU, RAM, and disk speed.

[Image: Mac Pro, circa 2013]

Bill Karwin
2

What kind of queries are going on? One of my mantras: "You cannot configure your way out of a performance problem."

Here's one setting that speeds up InnoDB with respect to transactions:

innodb_flush_log_at_trx_commit = 2

There is a simple way to speed up single-row inserts by a factor of 10.
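He doesn't spell out the trick here, but the usual approach is batching: send many rows in one multi-row INSERT, or wrap many single-row INSERTs in one transaction instead of autocommitting each one. A sketch with an invented table:

-- instead of many autocommitted single-row inserts:
INSERT INTO t (a, b) VALUES
  (1, 'x'),
  (2, 'y'),
  (3, 'z');

-- or keep single-row statements but commit only once:
START TRANSACTION;
INSERT INTO t (a, b) VALUES (1, 'x');
INSERT INTO t (a, b) VALUES (2, 'y');
COMMIT;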

Some 'composite' indexes can speed up a SELECT by a factor of 100.
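For example (table and columns invented for illustration), a query that filters on two columns can often be served by one composite index covering both:

-- query: SELECT ... FROM orders WHERE customer_id = 42 AND status = 'open';
ALTER TABLE orders ADD INDEX idx_customer_status (customer_id, status);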

Reformulating a WHERE can sometimes speed up a query by a factor of 100.
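A classic case is rewriting a WHERE clause so the indexed column is not wrapped in a function, letting MySQL use a range scan on the index (table and column names are illustrative):

-- cannot use an index on created_at:
SELECT * FROM orders WHERE DATE(created_at) = '2016-12-09';

-- rewritten so an index on created_at can be used:
SELECT * FROM orders
WHERE created_at >= '2016-12-09' AND created_at < '2016-12-10';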

Rick James
1

@Bill Karwin's answer has useful MySQL settings to improve performance. I have used them all and was able to achieve roughly a 2x performance improvement.

However, what gave me the biggest performance boost (nearly 15x faster) for my use case -- which was reloading a MySQL dump -- was to mount the underlying filesystem (ext4) with the nobarrier option.

mount -o remount,nobarrier /

More info here

You should only consider this if you have a separate partition (or logical volume) mounted at /var/lib/mysql, so that you can make this tradeoff only for MySQL, not your entire system.
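For instance, with a dedicated ext4 volume you could remount only that mount point, or set the option permanently in /etc/fstab (the device name below is an example):

mount -o remount,nobarrier /var/lib/mysql

# /etc/fstab
/dev/mapper/vg0-mysql  /var/lib/mysql  ext4  defaults,nobarrier  0  2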

jdhildeb
0

Although this answer may not address your question exactly, consider creating your tables with the MEMORY engine, as documented here: http://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html

A typical use case for the MEMORY engine involves these characteristics:

Operations involving transient, non-critical data such as session management or caching. When the MySQL server halts or restarts, the data in MEMORY tables is lost.

In-memory storage for fast access and low latency. Data volume can fit entirely in memory without causing the operating system to swap out virtual memory pages.

A read-only or read-mostly data access pattern (limited updates).

Give that a shot.
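As a sketch (the table, columns, and 512 MB cap are illustrative), a MEMORY table is created like any other, and max_heap_table_size limits how large it may grow:

SET max_heap_table_size = 512 * 1024 * 1024;  -- cap for MEMORY tables created in this session

CREATE TABLE session_cache (
  session_id CHAR(32) NOT NULL PRIMARY KEY,
  data VARBINARY(4096)
) ENGINE=MEMORY;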

My recommendation, even for a development machine, would be to use the default InnoDB. If you use transactions, InnoDB will be helpful.

This blog post can help you run MySQL off tmpfs: http://jotschi.de/2014/02/03/high-performance-mysql-testdatabase/. User Jotschi also discusses this in SO answer #10692398.

zedfoxus