
I have an Ubuntu server (2x 3.0 GHz quad-core CPUs, 16 GB of RAM, and RAID 1 on 15k SAS drives) running a standard LAMP stack, where approximately five clients per second connect to a PHP script. The clients each communicate a small amount of information, and that data is updated in the MySQL database.

In the background, a PHP script on the server reviews the data once a second and updates the database.

There are only a couple of records being updated over and over, not inserted, so table sizes are REALLY small (4 rows max).

That said, I normally keep an SSH session open to watch how fast the background script runs. After a month of running continuously, with SELECT and UPDATE queries taking about 100 microseconds apiece (0.0001 seconds), UPDATE suddenly started taking 20,000+ microseconds (0.02 seconds) per query, no matter which database was used or which column was updated (primary key or otherwise). I have restarted everything, including the server, deleted all my databases (only the ones I created, not the system databases), blocked the clients, and run queries from both the mysql console and phpMyAdmin, and the result is the same.

Here is an example, run from the console with no load or clients connected to the server:

mysql> CREATE DATABASE `test` /*!40100 DEFAULT CHARACTER SET utf8 */;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE TABLE `test` (
  `idtest` int(11) NOT NULL AUTO_INCREMENT,
  `field1` varchar(64) NOT NULL,
  `field2` varchar(64) NOT NULL,
  `field3` int(11) NOT NULL,
  PRIMARY KEY (`idtest`)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.14 sec)

mysql> INSERT INTO `test`.`test` (`idtest`, `field1`, `field2`, `field3`) VALUES (NULL, 'Testing', 'None', '0');
Query OK, 1 row affected (0.02 sec)

mysql> SELECT * FROM test;
+--------+---------+--------+--------+
| idtest | field1  | field2 | field3 |
+--------+---------+--------+--------+
|      1 | Testing | None   |      0 |
+--------+---------+--------+--------+
1 row in set (0.00 sec)

mysql> UPDATE `test`.`test` SET `field3` = '1' WHERE `test`.`idtest` = 1;
Query OK, 1 row affected (0.03 sec)
Rows matched: 1  Changed: 1  Warnings: 0

From the above example it appears INSERT is affected as well. I have run the same queries on another server with about one tenth of the resources, and it executes them in 100 to 200 microseconds, as expected. I suspect it has something to do with InnoDB caching, but I could be wrong.
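One way to narrow this down is to profile a single slow statement and see which execution stage eats the time (a diagnostic sketch; SHOW PROFILE is deprecated in 5.6 but still available, and if most of the time lands in the "query end" stage, the cost is typically the log flush at commit rather than the query itself):

-- Profile one UPDATE to see which stage is slow.
SET profiling = 1;
UPDATE `test`.`test` SET `field3` = '2' WHERE `test`.`idtest` = 1;
SHOW PROFILES;                -- profiled statements and their durations
SHOW PROFILE FOR QUERY 1;     -- stage-by-stage breakdown of the UPDATE
SET profiling = 0;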

Here is some info on the server and MySQL:

Load average: 0.04, 0.04, 0.05

Server: Localhost via UNIX socket

Server type: MySQL

Server version: 5.6.28-0ubuntu0.15.04.1 - (Ubuntu)

Protocol version: 10

User: root@localhost

Server charset: UTF-8 Unicode (utf8)

and

mysql> SHOW VARIABLES LIKE '%query%';
+------------------------------+-----------------------------------+
| Variable_name                | Value                             |
+------------------------------+-----------------------------------+
| binlog_rows_query_log_events | OFF                               |
| ft_query_expansion_limit     | 20                                |
| have_query_cache             | YES                               |
| long_query_time              | 10.000000                         |
| query_alloc_block_size       | 8192                              |
| query_cache_limit            | 1048576                           |
| query_cache_min_res_unit     | 4096                              |
| query_cache_size             | 16777216                          |
| query_cache_type             | OFF                               |
| query_cache_wlock_invalidate | OFF                               |
| query_prealloc_size          | 8192                              |
| slow_query_log               | OFF                               |
| slow_query_log_file          | /var/lib/mysql/xxxxx-slow.log     |
+------------------------------+-----------------------------------+
13 rows in set (0.00 sec)
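Note that with slow_query_log OFF and long_query_time at 10 seconds, a 0.02-second UPDATE will never appear in the slow log. To capture everything temporarily while reproducing the problem (standard server variables, settable at runtime; the global long_query_time only applies to new connections):

-- Log every statement to the slow log while reproducing the issue.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;   -- new connections log all queries
-- ...reproduce the slow UPDATEs, then restore the defaults:
SET GLOBAL long_query_time = 10;
SET GLOBAL slow_query_log = 'OFF';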

and

mysql> SHOW VARIABLES LIKE '%cache%';
+--------------------------------+----------------------+
| Variable_name                  | Value                |
+--------------------------------+----------------------+
| binlog_cache_size              | 32768                |
| binlog_stmt_cache_size         | 32768                |
| have_query_cache               | YES                  |
| host_cache_size                | 279                  |
| innodb_disable_sort_file_cache | OFF                  |
| innodb_ft_cache_size           | 8000000              |
| innodb_ft_result_cache_limit   | 2000000000           |
| innodb_ft_total_cache_size     | 640000000            |
| key_cache_age_threshold        | 300                  |
| key_cache_block_size           | 1024                 |
| key_cache_division_limit       | 100                  |
| max_binlog_cache_size          | 18446744073709547520 |
| max_binlog_stmt_cache_size     | 18446744073709547520 |
| metadata_locks_cache_size      | 1024                 |
| query_cache_limit              | 1048576              |
| query_cache_min_res_unit       | 4096                 |
| query_cache_size               | 16777216             |
| query_cache_type               | OFF                  |
| query_cache_wlock_invalidate   | OFF                  |
| stored_program_cache           | 256                  |
| table_definition_cache         | 615                  |
| table_open_cache               | 431                  |
| table_open_cache_instances     | 1                    |
| thread_cache_size              | 8                    |
+--------------------------------+----------------------+
24 rows in set (0.00 sec)
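Neither listing shows the InnoDB durability settings, which dominate UPDATE and INSERT latency on tiny tables. With the default innodb_flush_log_at_trx_commit = 1, every commit forces the redo log to disk; if the RAID controller's write-back cache has stopped working (for example, after a battery failure), each flush becomes a real disk write, and roughly 20 ms per statement is exactly what a single forced write to a spinning disk looks like. A quick, reversible test (this relaxes durability, so use it for diagnosis only):

-- Check the current flush behaviour.
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'sync_binlog';

-- Temporarily flush the redo log once a second instead of per commit.
-- If UPDATEs drop back to microseconds, the bottleneck is the fsync,
-- i.e. the disk or controller, not MySQL itself.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;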

my.cnf:

[mysqld]
innodb_autoinc_lock_mode=0
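If lowering the flush frequency helps, the equivalent test-only my.cnf would be (standard mysqld options; do not leave these in place if you need full durability):

[mysqld]
innodb_autoinc_lock_mode=0
# Diagnosis only: flush the InnoDB redo log once a second, not per commit.
innodb_flush_log_at_trx_commit=2
sync_binlog=0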

And lastly:

ubuntu@xxxxx:~$ df
Filesystem                     1K-blocks    Used Available Use% Mounted on
udev                             8202368       0   8202368   0% /dev
tmpfs                            1642808    9600   1633208   1% /run
/dev/mapper/xxxxx--vg-root  53035144 7827988  42490076  16% /
tmpfs                            8214020       0   8214020   0% /dev/shm
tmpfs                               5120       0      5120   0% /run/lock
tmpfs                            8214020       0   8214020   0% /sys/fs/cgroup
/dev/sda1                         240972  112893    115638  50% /boot
tmpfs                            1642808       0   1642808   0% /run/user/1000

Any ideas? Thank you in advance for the help!

Update:

Given the very odd and sudden onset of the issue, I am beginning to suspect it may be hardware related.

I completely purged MySQL from the server and reinstalled it from the repository, with no impact on the situation. I am running RAID 1 with two 15k SAS drives on a PERC 6/i, so I am going to start there. This may end up being more of a Server Fault question than a Stack Overflow one. I will report back after doing more digging.
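If the controller's write cache is the suspect, two quick checks are possible (a shell sketch; MegaCli has to be installed separately, its binary name and path vary by packaging, and the dd target file is arbitrary):

# Measure raw synchronous write latency: every 4 KB write is flushed to disk.
# A healthy write-back cache returns in well under a millisecond per write;
# a dead BBU in write-through mode takes ~10-20 ms per write.
dd if=/dev/zero of=/tmp/ddtest bs=4k count=1000 oflag=dsync

# Ask the PERC (LSI MegaRAID) about its battery and cache policy.
sudo megacli -AdpBbuCmd -GetBbuStatus -aALL
sudo megacli -LDGetProp -Cache -LAll -aALL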

  • Have you tried running the top command from the Linux terminal to see what the mysqld process is doing? What are its memory % and CPU % usage? – unixmiah Jul 14 '16 at 02:13
  • You have a somewhat complex server setup, especially with RAID. It is worth reading up before installing MySQL on a server like that. If you have the time, take a look at this book: ftp://203.157.240.9/pub/docs/High.Performance.MySQL.3rd.Edition.pdf – unixmiah Jul 14 '16 at 02:25
  • You might want to try it on FreeBSD if you're looking for optimal performance. http://webcodingstudio.com/blog/freebsd-92-server-configuration-apache-php-mysql-dns-samba – unixmiah Jul 14 '16 at 02:31

2 Answers


If it SUDDENLY started taking much longer, several things come to mind.

I won't go on about the obvious things, like the locking mechanics that may be relied upon to implement simultaneous updates in place.

My offhand guess is that you have crossed a file-size threshold where the filesystem now requires, say, double-indirect disk block addressing rather than single-indirect addressing. That would make each individual call to fetch the next disk block in logical sequence that much slower, penalizing every ACTUAL I/O. By actual I/O I mean something that goes out and hits the disk block hash queue and (at some point) tasks the disk controller, as opposed to a read() that just pulls data already prefetched by the previous disk I/O.

That's just a guess.
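For what it's worth, the file-size theory is easy to check from SQL (standard information_schema columns):

-- Report data and index size per table, in KB.
SELECT table_schema, table_name,
       data_length  / 1024 AS data_kb,
       index_length / 1024 AS index_kb
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');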

Are you really upset about 20,000 µs = 20 ms?

  • I thought about locking and related issues, but the problem remains with no clients connected other than the console. Also, the database size is under 500 KB, so I am not sure where file size would come into play, other than a log. And lastly, although 20,000 µs sounds like a short period of time, it is noticeable to the naked eye when I do 20 updates per iteration (once per second) in my background script. – user1080943 Jul 14 '16 at 00:39

If you want to improve the speed of updates, consider using CHAR instead of VARCHAR fields, and create nonclustered (secondary) indexes on the columns you use for comparison in your WHERE clauses; both are sketched below.
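A minimal sketch of both suggestions, applied to the test table from the question (the column choices are illustrative):

-- Fixed-width columns avoid row-length changes on update.
ALTER TABLE `test`.`test`
  MODIFY `field1` CHAR(64) NOT NULL,
  MODIFY `field2` CHAR(64) NOT NULL;

-- A secondary index on a column used for comparisons in WHERE clauses.
CREATE INDEX `idx_field3` ON `test`.`test` (`field3`);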

Also, don't measure your update speed by running a single query: with a mechanical hard drive, spin-up time may affect the result, and Windows may cache the database files and serve the data from RAM (disable the Superfetch service). Run loops to test spin-up delays, for example with the stored procedure sketched below.
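One way to run such a loop entirely inside MySQL (a hypothetical stored procedure; the name and iteration count are arbitrary):

-- Time many UPDATEs in a row to average out one-off disk delays.
DELIMITER //
CREATE PROCEDURE bench_updates(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    UPDATE `test`.`test` SET `field3` = i WHERE `idtest` = 1;
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;

-- 1,000 updates; divide the elapsed time of the CALL by 1,000
-- to get the per-query cost.
CALL bench_updates(1000);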

– Uğur Gümüşhan