I would like to create a Python web API where in-memory state can be shared between requests. I understand that the recommended/best-practice answer is to use memcached or Redis. However, I do not want to use those; I want a local shared-memory option:
- I will only ever run this application on one server. No clusters, no load balancing, no need to share memory between nodes.
- I'm also just curious: if I wanted to write a cache service like memcached myself, how would I go about it without using memcached? (Rough sketch of what I mean below.)
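Something like this tiny, thread-safe, in-process dict-plus-lock cache is what I'm imagining (just a sketch; the class and method names are mine, and it's nothing like memcached's actual protocol):

```python
import threading
import time

class SimpleCache:
    """Toy stand-in for memcached: a dict guarded by a lock, with optional TTLs."""

    def __init__(self):
        self._data = {}              # key -> (value, expires_at or None)
        self._lock = threading.Lock()

    def set(self, key, value, ttl=None):
        expires_at = time.monotonic() + ttl if ttl is not None else None
        with self._lock:
            self._data[key] = (value, expires_at)

    def get(self, key, default=None):
        with self._lock:
            entry = self._data.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if expires_at is not None and time.monotonic() > expires_at:
            return default
        return value
```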
My current understanding is that Gunicorn spawns multiple worker processes and "recycles" them, meaning any "global" variable is not going to be around consistently between requests. I therefore reason that I would either need to find a way to serve the WSGI app with only one process and share memory between threads, or find a way to share memory between processes. In either case, though, how would I set that up if Gunicorn is in control of the processes? And how would I prevent worker recycling from losing the state?
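For example, my understanding is that something naive like the following only appears to work with a single worker; with `gunicorn -w 4` each worker process gets its own copy of `counter`, and a recycled worker starts from zero again (a minimal sketch, assuming Flask):

```python
from flask import Flask

app = Flask(__name__)

# Module-level "shared" state: each Gunicorn worker process gets its own
# copy of this dict, and the copy is lost whenever that worker is recycled.
counter = {"hits": 0}

@app.route("/hit")
def hit():
    counter["hits"] += 1  # only increments this worker's private copy
    return {"hits": counter["hits"]}
```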
I've also seen some recommendations to use Gunicorn's `preload_app` setting (`--preload` on the command line), but people say it isn't good for production?
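My rough understanding of what preloading buys you, written as a config sketch (I haven't verified the production caveats myself):

```python
# gunicorn.conf.py
preload_app = True   # import the application once in the master, then fork workers
workers = 4

# Anything built at import time (e.g. a big read-only dict loaded in the Flask
# module) is shared copy-on-write across the forked workers, but any writes
# made after the fork stay private to whichever worker made them.
```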
The shared memory will be read 99% of the time, and I don't mind locking the app when I need to write to it.
Can anyone point me to a reliable and/or documented way to set up, e.g., Flask/Gunicorn so that I can load something into memory and then read from it between requests? I'm sure there must be something. Thanks.
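In case it helps clarify what I'm after, the closest thing I've sketched so far is a single worker process with threads, so that one copy of the state is genuinely shared between requests (assuming Flask and Gunicorn's threaded worker; I don't know whether this counts as reliable):

```python
# app.py
import threading
from flask import Flask, request

app = Flask(__name__)

# One shared, read-mostly dict; the lock is only taken on the rare writes.
store = {"config": "loaded at import time"}
store_lock = threading.Lock()

@app.route("/get/<key>")
def get_value(key):
    # Plain reads are left unlocked: under CPython's GIL a single dict
    # lookup is atomic, and the writes below replace whole values at once.
    return {"value": store.get(key)}

@app.route("/set/<key>", methods=["POST"])
def set_value(key):
    with store_lock:
        store[key] = request.get_json()
    return {"ok": True}
```

Run with something like `gunicorn --workers 1 --threads 8 app:app`, so every request lands in the same process. What I can't tell is whether pinning Gunicorn to a single worker like this is a documented/supported pattern or just a hack.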