
My question is not about speed; I'm keen to minimize the memory usage of my scripts. It's a LAMP web application on a limited shared hosting resource (1 GB virtual memory only).

  1. The application reads from MySQL at every page visit to load from the sys_config table and the user table (a pretty normal operation).

Should I load the configs into the session instead, knowing that sessions are basically a disk-writing activity?

Will this use less memory (important)? Will it execute faster (less important)?
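For illustration, something like this rough sketch is what I have in mind; the `get_config()` helper, the PDO handle and the name/value columns are just placeholders, not my actual code:

```php
<?php
// Rough sketch: cache the sys_config rows in the session after the first
// page view of a session. Assumes a PDO connection in $pdo and a
// sys_config table with `name` and `value` columns (column names are
// placeholders, not necessarily the real schema).
session_start();

function get_config(PDO $pdo)
{
    // Reuse the copy already stored in the session, if any.
    if (isset($_SESSION['sys_config'])) {
        return $_SESSION['sys_config'];
    }

    // First request of the session: select only what is needed.
    $stmt   = $pdo->query('SELECT name, value FROM sys_config');
    $config = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);

    $_SESSION['sys_config'] = $config;
    return $config;
}
```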

  2. The application also writes to MySQL at every page load to maintain small bits of data (last visit and a few other micro data). This is important, as it keeps the system aware of whether a user is still online.

Should I write this micro data to a file instead (and periodically save it to the DB if needed later)?

Will this use less memory (important)? Will it execute faster (less important)?
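Again for illustration, a rough sketch of the file approach; the path, line format and the `user.last_visit` column are placeholders:

```php
<?php
// Rough sketch: append one "last seen" line per hit to a flat file instead
// of issuing an UPDATE on every page load, then replay it into MySQL from
// a cron job. Path, line format and the user/last_visit schema are
// placeholders.
$logFile = '/tmp/last_seen.log';

function record_visit($userId, $logFile)
{
    // One short line per hit; LOCK_EX avoids interleaved writes.
    file_put_contents($logFile, $userId . "\t" . time() . "\n", FILE_APPEND | LOCK_EX);
}

// Periodic flush (e.g. from cron): keep the latest timestamp per user and
// write everything back to MySQL in one batch, then truncate the file.
function flush_visits(PDO $pdo, $logFile)
{
    if (!is_file($logFile)) {
        return;
    }

    $latest = array();
    $fh = fopen($logFile, 'r');          // read line by line, not file()
    while (($line = fgets($fh)) !== false) {
        $line = trim($line);
        if ($line === '') {
            continue;
        }
        list($userId, $ts) = explode("\t", $line);
        $latest[(int) $userId] = (int) $ts;   // later lines win, per user
    }
    fclose($fh);

    $stmt = $pdo->prepare('UPDATE user SET last_visit = FROM_UNIXTIME(?) WHERE id = ?');
    foreach ($latest as $userId => $ts) {
        $stmt->execute(array($ts, $userId));
    }

    file_put_contents($logFile, '');     // truncate after flushing
}
```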

  3. The application uses AJAX, and while the user is on a page it loads data in the background at roughly 10-second intervals. Each page has several sets of data that could be treated, and thereby loaded, differently (e.g. contacts, online users, groups). I separated the HTTP/AJAX requests, one per set of data, assuming that smaller chunks of requests will be easier/lighter for the server to process than one AJAX request pulling and preparing several sets of data at once.

Is this a good strategy for minimizing PHP memory usage?

BrownChiLD
  • Note on `Knowing sessions are basically disk writing activity`: SQL does the same; all databases are real files in the filesystem. – JustOnUnderMillions Mar 13 '17 at 15:26
  • @JustOnUnderMillions Not all; for instance, MySQL provides a storage engine that uses [memory](https://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html) to store the data. – hassan Mar 13 '17 at 15:28
  • @hassan Sure, you are right, but by default all MySQL databases are located in the filesystem, and `MEMORY` is the only engine that does use memory. :-) – JustOnUnderMillions Mar 13 '17 at 15:30
  • `memory use` It depends more on what you do and how you do it. Do you `SELECT *` or only select what is needed? Do you use `file()`, which loads a given file fully into memory, or do you use `fopen` and read line by line? Do you `unset` stuff after using it? Do you send `html` with AJAX or only plain data that is rendered on the client side? And many more..... – JustOnUnderMillions Mar 13 '17 at 15:35

2 Answers


Session data is kept entirely in memory while the script runs and uses the disk only to persist the data for the next request. Whatever you read from MySQL is also stored in memory first.

PHP and MySQL each have their own heap memory with a garbage collector, which pre-allocates a fairly large amount of memory for internal use, so writing small chunks of data like a last-visit timestamp to disk will not give a noticeable improvement.

For the same reason, using many small requests will be less efficient, because PHP will allocate more memory than it actually needs for each request. So a more optimal chunk size would be around 1 GB / max_php_process_count. In other words, each request should be as big as possible to maximize memory-usage efficiency.
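For example, a rough sketch of a combined endpoint that serves everything in one poll; the `fetch_*` helpers and `$pdo` stand in for whatever code your pages already have:

```php
<?php
// Rough sketch: one AJAX endpoint that returns contacts, online users and
// groups together, so each 10-second poll costs a single PHP request
// instead of three. The fetch_* helpers are placeholders for the existing
// queries.
header('Content-Type: application/json');

echo json_encode(array(
    'contacts' => fetch_contacts($pdo),
    'online'   => fetch_online_users($pdo),
    'groups'   => fetch_groups($pdo),
));
```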

Anyway, all optimization at this level depends on your PHP version, and the simplest (and probably most efficient) way is to use swap.

Vasily L
  • hmm, I see what you're getting at.. so basically it's better to group the different data set requests into one big call rather than making several calls.. will keep note of that. BTW I'm using PHP version 5.4.36 .. are the newer PHP versions faster? I actually was thinking of swap and asked my hosts why not put excess memory use on disk rather than terminate; slowdowns are fine rather than terminations. They said no ;( – BrownChiLD Mar 13 '17 at 16:30

The answer greatly depends on the purpose of your program.

The MySQL server uses however much memory you give it (usually 64 MB or 128 MB) and uses that memory to optimize your requests.

Performance-wise, MySQL is about 500x slower than sessions (on RAID 0 with 3x SSD) for simple reads/writes.

Generally speaking, if you don't need to keep any persistent data, go with sessions or Redis. If you care about storing user data, go with a database. Database connections/requests do not require much memory at all and will make sure that your data remains intact. Store all of your data in the database as long as you do not do any searches on the data itself, and use indexes to access it.

You can make the database a lot faster by enabling query caching (usually a bad practice), which will make your repetitive AJAX requests amazingly fast.
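If you want to check whether the query cache is even available, a rough sketch like this works against MySQL 5.x (the query cache was removed in MySQL 8.0, and on shared hosting the server settings are usually not yours to change):

```php
<?php
// Rough sketch: inspect the MySQL 5.x query cache settings through an
// existing PDO connection. Enabling/sizing the cache is a server-side
// my.cnf setting (query_cache_type, query_cache_size), not something a
// PHP script can normally change on shared hosting.
$vars = $pdo->query("SHOW VARIABLES LIKE 'query_cache%'")
            ->fetchAll(PDO::FETCH_KEY_PAIR);
print_r($vars);
```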

Summary: for performance/memory, permanently store settings/data in the database, but load the values into the session when the user logs in. Whenever you need to write or retrieve information, use the database. If you want something to be amazingly fast, use Redis or memcached and create a background script that refreshes the data stored in it every couple of seconds, while making users grab data only from memcached/Redis.
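As a rough illustration of that last suggestion, a read-through sketch with the Memcached extension; the key name, TTL and the table/column names are placeholders:

```php
<?php
// Rough sketch: serve the "online users" list from memcached and let a
// small cron/worker script refresh it every few seconds, so page requests
// and AJAX polls never touch MySQL for it. Key name, TTL and the
// user/last_visit schema are placeholders.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

function get_online_users(Memcached $mc, PDO $pdo)
{
    $cached = $mc->get('online_users');
    if ($cached !== false) {
        return $cached;                      // served from RAM, no SQL
    }
    return refresh_online_users($mc, $pdo);  // cold cache: fall back to DB
}

// Called by the background worker (and on cache misses).
function refresh_online_users(Memcached $mc, PDO $pdo)
{
    $stmt = $pdo->query(
        'SELECT id, username FROM user
          WHERE last_visit > NOW() - INTERVAL 1 MINUTE'
    );
    $online = $stmt->fetchAll(PDO::FETCH_ASSOC);
    $mc->set('online_users', $online, 30);   // 30-second TTL
    return $online;
}
```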

Dimi
  • Been thinking about caching, or even using CouchDB and such, but these things seem to be geared for speed. I'm more focused on which method has the least memory usage. If I remember correctly, memcached is super fast because it keeps everything in memory = counter-intuitive to what I need to achieve == less memory use. – BrownChiLD Mar 13 '17 at 16:27
  • @BrownChiLD Memcached allows you to set how much memory it can use max. By sacrificing 64MB RAM you can achieve amazing performance increase. 1GB RAM should be more than enough for all of your Databases, Caching, Apache and PHP scripts. There are very few things that require huge chunks of memory during runtime. If you are doing heavy processing, you will need to implement Job Queue, and run 2-3 workers that have memory restrictions to do all the work. But for pretty much anything else, you should not have any issues with memory if you use MySQL/Memcached. – Dimi Mar 13 '17 at 16:33
  • thanks for the tip. Unfortunately, 1GB of RAM is not enough and it's causing a great deal of pain for us .. and nope, our web app is small and handles very small datasets (the tables in the database have less than 500 rows) .. feel free to check my other post: http://stackoverflow.com/questions/42768234/implementing-a-strategy-to-programatically-load-balance-my-php-web-app – BrownChiLD Mar 13 '17 at 16:44
  • You can consider implementing a process manager for your jobs. But realistically speaking, 1GB is definitely not enough given how much your app consumes on average. Even with process management, your average memory use will be greater than 1GB. (Something like this: http://stackoverflow.com/questions/42512692/how-to-check-if-there-is-a-there-is-a-wget-instance-running/42513196#42513196) – Dimi Mar 13 '17 at 18:48
  • I see.. thanks... yeah, I'm pulling my hair out trying to figure out why my app is eating so much RAM. Thanks though. – BrownChiLD Mar 14 '17 at 15:48