I have an Ubuntu server (2x 3.0 GHz quad-core CPUs, 16 GB RAM, and RAID 1 on 15k SAS drives) running a normal LAMP stack, where approximately five clients per second connect to a PHP script. The clients communicate a small amount of information, and that data is updated in the MySQL database.
In the background on the server, I have a PHP script that reviews the data once a second and updates the database.
There are only a couple of records being updated over and over, not inserted, so table sizes are REALLY small (4 rows max).
That said, I normally have an SSH session open watching the speed at which the background script runs. After a month of running straight, with SELECT and UPDATE queries taking about 100 microseconds apiece (0.0001 seconds), UPDATE suddenly started taking 20,000+ microseconds (0.02 seconds) per query, no matter which database or which column was used or updated (primary key or otherwise). I have restarted everything, including the server, deleted all my databases (only the ones I created, not the system databases), blocked the clients, and run queries from both the mysql console and phpMyAdmin, and the result is the same.
Here is an example, run from the console with no load or clients connected to the server:
mysql> CREATE DATABASE `test` /*!40100 DEFAULT CHARACTER SET utf8 */;
Query OK, 1 row affected (0.00 sec)
mysql> CREATE TABLE `test` (
`idtest` int(11) NOT NULL AUTO_INCREMENT,
`field1` varchar(64) NOT NULL,
`field2` varchar(64) NOT NULL,
`field3` int(11) NOT NULL,
PRIMARY KEY (`idtest`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.14 sec)
mysql> INSERT INTO `test`.`test` (`idtest`, `field1`, `field2`, `field3`) VALUES (NULL, 'Testing', 'None', '0');
Query OK, 1 row affected (0.02 sec)
mysql> SELECT * FROM test;
+--------+---------+--------+--------+
| idtest | field1  | field2 | field3 |
+--------+---------+--------+--------+
|      1 | Testing | None   |      0 |
+--------+---------+--------+--------+
1 row in set (0.00 sec)
mysql> UPDATE `test`.`test` SET `field3` = '1' WHERE `test`.`idtest` = 1;
Query OK, 1 row affected (0.03 sec)
Rows matched: 1 Changed: 1 Warnings: 0
From the above example, it also appears that INSERT is affected. I have run the same queries on another server with about a tenth of the resources, and it runs them in 100 to 200 microseconds, as expected. I suspect it has something to do with the InnoDB cache, but I could be wrong.
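To see where the time inside a single statement actually goes, the next thing I plan to try is the session profiler (deprecated in 5.6, but still available). This is just a rough sketch against the test table above; if I understand the stage names correctly, time spent flushing the InnoDB log at commit shows up under "query end":

-- Profile one of the slow UPDATEs and break it down by stage.
SET profiling = 1;
UPDATE `test`.`test` SET `field3` = `field3` + 1 WHERE `idtest` = 1;
-- List the profiled statements with their total durations:
SHOW PROFILES;
-- Per-stage breakdown for the statement numbered 1 in that list:
SHOW PROFILE FOR QUERY 1;
SET profiling = 0;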
Here is some info on the server and MySQL:
Load average: 0.04, 0.04, 0.05
Server: Localhost via UNIX socket
Server type: MySQL
Server version: 5.6.28-0ubuntu0.15.04.1 - (Ubuntu)
Protocol version: 10
User: root@localhost
Server charset: UTF-8 Unicode (utf8)
and
mysql> SHOW VARIABLES LIKE '%query%';
+------------------------------+-------------------------------+
| Variable_name                | Value                         |
+------------------------------+-------------------------------+
| binlog_rows_query_log_events | OFF                           |
| ft_query_expansion_limit     | 20                            |
| have_query_cache             | YES                           |
| long_query_time              | 10.000000                     |
| query_alloc_block_size       | 8192                          |
| query_cache_limit            | 1048576                       |
| query_cache_min_res_unit     | 4096                          |
| query_cache_size             | 16777216                      |
| query_cache_type             | OFF                           |
| query_cache_wlock_invalidate | OFF                           |
| query_prealloc_size          | 8192                          |
| slow_query_log               | OFF                           |
| slow_query_log_file          | /var/lib/mysql/xxxxx-slow.log |
+------------------------------+-------------------------------+
13 rows in set (0.00 sec)
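(Side note: with slow_query_log OFF and long_query_time at the default 10 seconds, these 20 ms UPDATEs will never show up in any log. If it helps with diagnosis, I assume I could capture them in the slow log file shown above with something like the following:)

-- Log anything slower than 10 ms; the threshold only applies to connections opened after the change.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.01;
-- Optionally also log statements that do not use an index:
SET GLOBAL log_queries_not_using_indexes = 'ON';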
And the cache-related variables:
mysql> SHOW VARIABLES LIKE '%cache%';
+--------------------------------+----------------------+
| Variable_name                  | Value                |
+--------------------------------+----------------------+
| binlog_cache_size              | 32768                |
| binlog_stmt_cache_size         | 32768                |
| have_query_cache               | YES                  |
| host_cache_size                | 279                  |
| innodb_disable_sort_file_cache | OFF                  |
| innodb_ft_cache_size           | 8000000              |
| innodb_ft_result_cache_limit   | 2000000000           |
| innodb_ft_total_cache_size     | 640000000            |
| key_cache_age_threshold        | 300                  |
| key_cache_block_size           | 1024                 |
| key_cache_division_limit       | 100                  |
| max_binlog_cache_size          | 18446744073709547520 |
| max_binlog_stmt_cache_size     | 18446744073709547520 |
| metadata_locks_cache_size      | 1024                 |
| query_cache_limit              | 1048576              |
| query_cache_min_res_unit       | 4096                 |
| query_cache_size               | 16777216             |
| query_cache_type               | OFF                  |
| query_cache_wlock_invalidate   | OFF                  |
| stored_program_cache           | 256                  |
| table_definition_cache         | 615                  |
| table_open_cache               | 431                  |
| table_open_cache_instances     | 1                    |
| thread_cache_size              | 8                    |
+--------------------------------+----------------------+
24 rows in set (0.00 sec)
my.cnf:
[mysqld]
innodb_autoinc_lock_mode=0
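Since that is the only setting I have overridden, I assume the buffer pool and flush behaviour are all at their 5.6 defaults; verifying the ones that matter for write latency would be something like:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';          -- default is 128M in 5.6
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';   -- default is 1 = flush and sync the log on every commit
SHOW VARIABLES LIKE 'innodb_flush_method';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free'; -- a non-zero value would suggest buffer pool pressure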
And lastly:
ubuntu@xxxxx:~$ df
Filesystem                 1K-blocks     Used Available Use% Mounted on
udev                         8202368        0   8202368   0% /dev
tmpfs                        1642808     9600   1633208   1% /run
/dev/mapper/xxxxx--vg-root  53035144  7827988  42490076  16% /
tmpfs                        8214020        0   8214020   0% /dev/shm
tmpfs                           5120        0      5120   0% /run/lock
tmpfs                        8214020        0   8214020   0% /sys/fs/cgroup
/dev/sda1                     240972   112893    115638  50% /boot
tmpfs                        1642808        0   1642808   0% /run/user/1000
Any ideas? Thank you in advance for the help!
Update:
Given the very odd and sudden onset of the issue, I am beginning to suspect it may be hardware-related.
I completely purged MySQL from the server and reinstalled it from the repository, which had no impact on the situation. I am running RAID 1 with two 15k SAS drives on a Perc6i, so I am going to start there. This may end up being more of a Server Fault question than a Stack Overflow one. I will report back after doing more digging.
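One quick experiment I plan to run before tearing into the controller: temporarily relax the InnoDB commit flush and see whether the latency disappears. If it does, the ~20 ms is being spent syncing the log to disk on every commit, which (as far as I understand it) is exactly what you would see if the Perc6i dropped from write-back to write-through because of a failed cache battery. Not something I would leave enabled, since it trades durability for speed:

-- The default (1) forces a log flush plus a sync to disk on every COMMIT.
-- A value of 2 writes to the OS cache on commit and syncs roughly once a second,
-- taking the disk out of the per-UPDATE path.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- ...re-run the UPDATE timing test from above, then restore the default:
SET GLOBAL innodb_flush_log_at_trx_commit = 1;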