
I've got a fairly generic database setup on Apache/PHP/MariaDB, using some very large tables. A report run on these tables might take a few minutes. The operations are read-only nested SELECTs; nothing writes to the database.
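For context, the reports boil down to something like this (a simplified sketch; the connection details and schema names are illustrative, not my actual code):

   <?php
   // Simplified sketch of the report pattern: an outer SELECT whose rows
   // each trigger nested SELECTs. Everything is read-only.
   $db = new mysqli('localhost', 'report_user', 'secret', 'reports');

   ini_set('max_execution_time', 600); // these reports can run for minutes

   $outer = $db->query("SELECT DISTINCT provnum FROM Table_1");
   while ($row = $outer->fetch_assoc()) {
       // One nested SELECT per outer row -- this is what takes minutes.
       $inner = $db->query(
           "SELECT * FROM Table_1" .
           " WHERE provnum = '" . $db->real_escape_string($row['provnum']) . "'" .
           " AND trgt_qtr = '2013Q2'" .
           " ORDER BY res_cat ASC"
       );
       while ($detail = $inner->fetch_assoc()) {
           // ... accumulate rows into the report output ...
       }
   }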

I notice that if I run a report in one browser window and then try to run a different report on the database in a second window, the second one stalls and seems to wait until the first report is complete.

I can't figure out what configuration setting or additional commands I might need to allow concurrent operations of this nature. Any help would be appreciated.

I'm running: Apache/2.4.6 (CentOS), OpenSSL/1.0.1e-fips, PHP/5.4.16, and 5.5.52-MariaDB. The tables are InnoDB.

I assume there may be some sort of transaction/locking behavior going on that I can disable? I'm only reading tables, not updating anything.

Trent Three
  • Which engine? Please provide the competing queries and `SHOW CREATE TABLE`. Look at `SHOW PROCESSLIST`. – Rick James Jul 13 '17 at 20:38
  • The engine is InnoDB. `SHOW CREATE TABLE Table_1` shows just basic columns, nothing unusual other than an auto-increment primary key. `SHOW PROCESSLIST` at various times shows one of the 2-3 nested sub-queries, such as: ` Query | 0 | Sorting result | select * from Table_1 where provnum='255284' and trgt_qtr='2013Q2' ORDER by res_cat ASC`. The only unusual thing I'm doing is setting `ini_set('max_execution_time', 600);` because some of these reports can run long. – Trent Three Jul 14 '17 at 16:07
  • Also I have some indexes in place on certain key fields. I'm just at a loss as to what might be locking the tables for reading? – Trent Three Jul 14 '17 at 17:18

3 Answers


It seems to be doing this because the task you give it overloads the CPU or RAM of your system. There isn't much you can do, at least to my knowledge; I think it really comes down to an overloaded CPU. Of course, the easy answer is to get better hardware so you can execute the query faster. If your CPU has multiple threads, it might be a RAM problem instead. Since you're using CentOS, try running

   top

during your report to see whether the problem lies in processing power or insufficient RAM.

Hope this helps :)

Firelumet
  • Thanks for the suggestion. I looked into it, and while the report is running I'm at 19.1% utilization. More importantly, I can pull up other sites on the web server that use the database, and they load fine. It appears to be some sort of table locking, preventing another process from accessing the tables until the first one is done. But I don't have any explicit commands of this nature, and the queries are not updating the database. I'm still stumped. – Trent Three Jul 13 '17 at 16:03
  • I've also tried putting `LOCK TABLES table1 READ, table2 READ;` and that doesn't help. For some reason the second query on the database won't start until the first one finishes, even though there are no updates, only selects. – Trent Three Jul 13 '17 at 16:44

   INDEX(provnum, trgt_qtr, res_cat)

would speed up the query a lot, thereby possibly avoiding the problem in question, or at least mitigating it.
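In PHP terms, something like this (a sketch; the index name and connection details are placeholders):

   <?php
   // Sketch: add the suggested composite index, then check that the
   // optimizer uses it. Index name and connection details are placeholders.
   $db = new mysqli('localhost', 'report_user', 'secret', 'reports');

   $db->query("ALTER TABLE Table_1
               ADD INDEX idx_prov_qtr_cat (provnum, trgt_qtr, res_cat)");

   // EXPLAIN should now show idx_prov_qtr_cat in the 'key' column, and the
   // 'Sorting result' phase seen in SHOW PROCESSLIST should go away,
   // since the index already delivers rows in res_cat order.
   $res = $db->query("EXPLAIN SELECT * FROM Table_1
                      WHERE provnum = '255284' AND trgt_qtr = '2013Q2'
                      ORDER BY res_cat ASC");
   print_r($res->fetch_assoc());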

Rick James
  • I have indexes in place. It's just a lot of data to crunch. My question is: why shouldn't I be able to have another process reading the tables while they're in use? I've never encountered this problem before, although I've never had queries that churned through this much data, to the point where query times were measured in minutes instead of milliseconds. Why isn't the server handling concurrent queries? It's not the time I'm worried about; it's the fact that no other process on the same data will run until the first one finishes. – Trent Three Jul 14 '17 at 17:17
  • Do you have _that composite_ index in place? I do _not_ mean 3 indexes, one per field. – Rick James Jul 14 '17 at 17:33
  • As for the question of battling queries, I do need to see the things I mentioned earlier; nothing is obvious in what you have presented so far. – Rick James Jul 14 '17 at 17:34
  • no composite index, but that's a good suggestion.. I'll incorporate that if I need it. The problem here was ultimately due to a session limitation. – Trent Three Jul 14 '17 at 23:15

Actually, it looks like the problem is not related to the database at all, but to the web/PHP layer limiting the number of concurrent requests it will serve for a single session.

I ran a report on one system, then pulled up the web interface on my phone and was able to run another report concurrently.

I believe the actual problem is PHP's default session handling: each request holds a lock on its session file, so only one request per browser session runs at a time. This normally goes unnoticed, because most requests finish almost instantly, but when a request takes minutes, it becomes obvious that something is serializing them. It's also worth noting that this interacts with database load in a way that can trick users into thinking it's a table-locking issue when it's really request serialization at the session layer.


My solution turned out to be quite simple. I simply added this call before the long report-generation process begins:

   session_write_close();

This allows another concurrent session to operate while the existing one is running.
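For reference, here is roughly where the call sits in my report script (the session key and report function are placeholders, not my actual code):

   <?php
   // Release the PHP session lock before the long-running report so that
   // other requests from the same browser are not serialized behind it.
   session_start();

   // Read anything the report needs from the session *before* closing it;
   // after session_write_close(), changes to $_SESSION are no longer saved.
   $user = isset($_SESSION['user']) ? $_SESSION['user'] : null; // placeholder key

   session_write_close(); // writes the session file and releases its lock

   ini_set('max_execution_time', 600); // reports can run for minutes

   generate_report($user); // placeholder for the long report logic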

Trent Three
  • Generally this is caused by the PHP session system, which limits you to one "running" request from the same client. Take a look at http://php.net/manual/en/function.session-write-close.php to close sessions before starting long-running parts of your script. Or, if you can, run without sessions entirely. – ontrack Jul 14 '17 at 17:30
  • Thanks - this is exactly what I think it is. The question is: can I disable this limitation for only my script and keep it in place for everyone else? Would this be done by calling session_write_close immediately before spawning the long report generation? Will the browser window still stay open long enough to generate the report? – Trent Three Jul 14 '17 at 17:34
  • Apache's `MaxClients` (or whatever it is called now) must be set below mysql's `max_connections`. – Rick James Jul 14 '17 at 17:35
  • "Sessions" are useless if you ever need to scale beyond one Apache server. – Rick James Jul 14 '17 at 17:36
  • @TrentThree yes you can call session_write_close right before your report generation and leave everything else the same. As long as you don't want to change any session variable after calling session_write_close everything will be the same. – ontrack Jul 14 '17 at 17:41
  • I've confirmed that is the solution. A simple call to `session_write_close()` allowed me to run multiple reports. I've updated my text to make this clear. It appears many people are running into this problem, and the solution wasn't completely clear, as evidenced in the wide variety of answers people have given. – Trent Three Jul 14 '17 at 17:46