
We want to improve the performance of our application, which is based on MySQL and PHP.

The current situation is an e-learning system that receives "bursts" of queries depending on the day of the week and/or the hour of the day (for example, hundreds of students from different schools starting drills at the same time).

As you may guess, these systems need real-time calculations all the time.

We have very few slow queries in general and we try to improve them when they appear in the logs.

The system is self-hosted on a VPS, so it is not our own hardware, but we consider a hardware upgrade to be part of the possible solution.

We have a specific table (study logs) that is read/write intensive, and we believe the bottleneck comes down to disk access on that table.

We are trying to figure out a hardware and/or software setup that could increase performance, especially when that log table is involved.

One solution we are considering is using replication to balance the write and read queries (ProxySQL + replication). Our fear with that setup is what happens if the master becomes unavailable...
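For reference, a minimal sketch of the ProxySQL admin configuration for such a read/write split — the hostnames and hostgroup numbers here are placeholders, not our real setup:

```sql
-- Run on the ProxySQL admin interface (default port 6032).
-- Hostgroup 10 = writer (master), hostgroup 20 = readers (replicas).
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (10, 'db-master.example',   3306),
       (20, 'db-replica1.example', 3306);

-- Route SELECT ... FOR UPDATE to the master, plain SELECTs to the
-- replicas; everything else falls through to the default (writer) group.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE$', 10, 1),
       (2, 1, '^SELECT',              20, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
```

Note that ProxySQL by itself only routes queries; it does not promote a new master. Automatic failover would need an additional tool on top of the replication setup.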

One software approach we are currently developing is a "summary" table that is recalculated only once a day or so. That should relieve some of the load on at least two screens of the application. The bottleneck in that case seemed to be the creation of temporary tables and the number of joined tables.
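To make the idea concrete, a daily summary of that kind could look like the sketch below — the table and column names are invented for illustration, not our actual schema:

```sql
-- Hypothetical source table: study_log(student_id, drill_id, log_date, score, duration_sec)
CREATE TABLE study_log_daily (
  log_date    DATE NOT NULL,
  student_id  INT  NOT NULL,
  drills_done INT  NOT NULL,
  avg_score   DECIMAL(5,2),
  total_secs  INT  NOT NULL,
  PRIMARY KEY (log_date, student_id)
);

-- Recalculated once a day (e.g. from cron); REPLACE makes the job idempotent.
REPLACE INTO study_log_daily
SELECT log_date, student_id, COUNT(*), AVG(score), SUM(duration_sec)
FROM study_log
WHERE log_date = CURRENT_DATE - INTERVAL 1 DAY
GROUP BY log_date, student_id;
```

The two heavy screens would then read the small pre-aggregated table instead of joining against the full log.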

I can add details as needed, please don't hesitate to ask.

EDIT: reformulate

What are the possible MySQL setups available to get better performance? Replication, cluster, other?

Thank you very much for your time.

Cedric Simon
  • OK. Thank you for your status report ... although this community is focusing on **questions**. – specializt Jan 25 '17 at 11:01
  • I see, I didn't clearly say it maybe. I edited the post. – Cedric Simon Jan 25 '17 at 11:29
  • You may want to at least get a trial of an APM such as new relic or appdynamics to identify where your actual bottlenecks are. If most of the time is spent waiting on database reads, you should absolutely add some slave database servers. – Sam Dufel Jan 25 '17 at 16:52
  • Please provide a few of the common/slow queries, together with `SHOW CREATE TABLE`. Sometimes an improvement is as simple as adding a composite index. – Rick James Jan 25 '17 at 18:41

1 Answer


Since you haven't included any code in your question, I'll answer in general terms:

  1. Measure memory usage with `memory_get_usage()`; put it on the last line of your script.

  2. Check the load average with `sys_getloadavg()`; put it on the last line of your script.

  3. Measure running time with `microtime()` (see "Accurate way to measure execution times of PHP scripts").

  4. Check which queries cost you the most with: `SELECT * FROM sys.x$statement_analysis`

  5. Measure performance over just ONE run, then try to make that run better.

  6. Find where you are hashing, for example where you use `crypt()`. Some hashing methods can cost you a lot. Find the places where you don't really need them; there you can use a cheap hash like `sha1` or `md5`. For example, if you are hashing user-avatar filenames inside a public folder, you can lower the cost with something like `md5`. But NEVER trade security for performance where security matters: for password hashing, never optimize for speed.

  7. Cache whatever can be cached; it can really help overall performance. Read "PHP Cache Dynamic Pages To Speed Up Load Times". You can also cache with Apache ("How To Configure Content Caching Using Apache Modules On A VPS") and nginx ("A Guide to Caching with NGINX and NGINX Plus").

  8. In your queries, avoid constructs that cannot use indexes (you can't always do that, but do it where you can). For example, `FIND_IN_SET` has a high impact on performance, especially when you are dealing with huge tables.

  9. If you really think logging impacts your performance, save the logs on another server. You can connect from the current server to MySQL on another server by IP, or build an API for this.

  10. Always think about better architecture. Sometimes when reviewing code you see something that can be dropped or replaced with a better idea.
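Items 1–3 above can be combined into one small snippet appended at the end of a script. This is only a sketch: the log format and the use of the peak-memory variant are my choices, not requirements.

```php
<?php
// ... your application code runs above this point ...

// 1) Peak memory used by this request, in bytes.
$memory = memory_get_peak_usage(true);

// 2) System load averages for the last 1, 5 and 15 minutes
//    (falls back to zeros on platforms where it is unavailable).
$load = sys_getloadavg() ?: [0.0, 0.0, 0.0];

// 3) Wall-clock running time: PHP records the request start for us
//    in $_SERVER['REQUEST_TIME_FLOAT'] (PHP 5.4+).
$elapsed = microtime(true) - $_SERVER['REQUEST_TIME_FLOAT'];

error_log(sprintf(
    'mem=%.1fMB load1=%.2f time=%.3fs',
    $memory / 1048576,
    $load[0],
    $elapsed
));
```

Run this on the pages that feel slow first; the numbers tell you whether you are bound by memory, CPU load, or time spent waiting on the database.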
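As an illustration of item 8, a comma-separated column queried with `FIND_IN_SET` cannot use an index, while a normalized junction table can. All names here are hypothetical:

```sql
-- Slow: forces a full table scan, because FIND_IN_SET
-- cannot use any index on the `tags` column.
SELECT * FROM drill WHERE FIND_IN_SET('algebra', tags);

-- Faster: normalize the list into a junction table with an index.
CREATE TABLE drill_tag (
  drill_id INT NOT NULL,
  tag      VARCHAR(50) NOT NULL,
  PRIMARY KEY (tag, drill_id)
);

SELECT d.*
FROM drill d
JOIN drill_tag t ON t.drill_id = d.drill_id
WHERE t.tag = 'algebra';
```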

ICE
  • Thank you very much for your answer, we'll try those and analyse things to then decide which action(s) will be best. Regarding a possible scale-out with replication, would a simple master-slave be enough, and are there points we should pay particular attention to? – Cedric Simon Jan 26 '17 at 05:45
  • @CedricSimon It's one of your options, but I highly recommend measuring your current memory usage, execution time and load average in one run and trying to improve those first. Sometimes deleting one heavy hash or removing one find_in_set makes the running time 2x faster without upgrading any hardware. Under heavy load you need to make your code perform well first, and for that you need to see everything. I also edited #7 and added #10 to my answer, and maybe this can be #11 if you haven't done it yet: http://stackoverflow.com/a/39931493/5617911 – ICE Jan 26 '17 at 15:13