
I have implemented a simple microservice using Flask, where the method that handles a request calculates a response based on the request data and a rather large data structure loaded into memory. Now, when I deploy this application using gunicorn and a large number of workers, I would simply like to share the data structure between the request handlers of all workers. Since the data is only read, there is no need for locking or similar. What is the best way to do this?

Essentially what would be needed is this:

  • load/create the large data structure when the server is initialized
  • somehow get a handle inside the request handling method to access the data structure

As far as I understand, gunicorn allows me to implement various hook functions, e.g. for when the server gets initialized, but a Flask request handler method does not know anything about the gunicorn server object.
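
For reference, here is a minimal sketch of what such hook functions look like in a gunicorn config file (the hook names and signatures are taken from the gunicorn documentation; the bodies are only illustrative and, as noted, nothing here is reachable from a Flask view later):

# gunicorn.conf.py

def on_starting(server):
    # Runs once in the master process, before any workers are forked.
    server.log.info("master is starting; the big structure could be built here")

def post_fork(server, worker):
    # Runs in every worker right after the fork; still no handle that a
    # Flask request handler could use to look the data up afterwards.
    server.log.info("worker %s forked", worker.pid)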

I do not want to use something like redis or a database system for this, since all the data sits in a data structure that needs to be loaded in memory, and no deserialization should be involved.

The calculation carried out for each request, which uses the large data structure, can be lengthy, so it must happen concurrently in a truly independent thread or process for each request (this should scale by running on a multi-core computer).

  • Possible duplicate of [Sharing Memory in Gunicorn?](https://stackoverflow.com/questions/27240278/sharing-memory-in-gunicorn) – rite2hhh Aug 08 '19 at 17:47

1 Answer


You can use preloading.

This will allow you to create the data structure ahead of time, then fork each request handling process. This works because of copy-on-write and the knowledge that you are only reading from the large data structure.

Note: Although this will work, it should probably only be used for very small apps or in a development environment. I think the more production-friendly way of doing this would be to queue up these calculations as tasks on the backend since they will be long-running. You can then notify users of the completed state.
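
As a rough illustration of that alternative (this is not part of the original setup; Celery and the redis broker URL below are just one possible choice, and the route and task names are made up):

# tasks_sketch.py -- pushing the long calculation to a task queue

import flask
from celery import Celery

celery_app = Celery('tasks',
                    broker='redis://localhost:6379/0',
                    backend='redis://localhost:6379/0')

# Each Celery worker process loads the structure once and only reads it.
data = {'big': 'data'}

@celery_app.task
def heavy_calculation(payload):
    # Long-running work that uses the shared, read-only structure.
    return {'keys': len(data), 'payload': payload}

app = flask.Flask(__name__)

@app.route('/compute', methods=['POST'])
def compute():
    # Enqueue the work and return immediately; the client can poll the
    # task id (or be notified) once the result is ready.
    task = heavy_calculation.delay(flask.request.get_json())
    return flask.jsonify({'task_id': task.id}), 202

The point is just that the web worker returns right away, while the heavy, memory-hungry work runs in a separate pool of processes that can be sized and preloaded independently.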


Here is a little snippet to show the difference preloading makes.

# app.py

import flask

app = flask.Flask(__name__)

def load_data():
    # Stand-in for building the large, read-only data structure.
    print('calculating some stuff')
    return {'big': 'data'}

@app.route('/')
def index():
    return repr(data)

# Built at import time; with --preload this runs once in the master
# process, so the forked workers share it via copy-on-write.
data = load_data()

Running with gunicorn app:app --workers 2:

[2017-02-24 09:01:01 -0500] [38392] [INFO] Starting gunicorn 19.6.0
[2017-02-24 09:01:01 -0500] [38392] [INFO] Listening at: http://127.0.0.1:8000 (38392)
[2017-02-24 09:01:01 -0500] [38392] [INFO] Using worker: sync
[2017-02-24 09:01:01 -0500] [38395] [INFO] Booting worker with pid: 38395
[2017-02-24 09:01:01 -0500] [38396] [INFO] Booting worker with pid: 38396
calculating some stuff
calculating some stuff

And running with gunicorn app:app --workers 2 --preload:

calculating some stuff
[2017-02-24 09:01:06 -0500] [38403] [INFO] Starting gunicorn 19.6.0
[2017-02-24 09:01:06 -0500] [38403] [INFO] Listening at: http://127.0.0.1:8000 (38403)
[2017-02-24 09:01:06 -0500] [38403] [INFO] Using worker: sync
[2017-02-24 09:01:06 -0500] [38406] [INFO] Booting worker with pid: 38406
[2017-02-24 09:01:06 -0500] [38407] [INFO] Booting worker with pid: 38407
  • Thanks! I understand the preload option now. How would one go about sharing very simple global data structures that need to get updated then? Let's say a simple counter to count all requests received by any worker. Because of COW, this will not work, so are there any alternatives short of having my own separate process just for doing this? – jpp1 Feb 24 '17 at 17:15
  • @Johsm I'd suggest using something like redis for that. – Jared Feb 24 '17 at 21:45
  • Great answer with examples. It's interesting to know that uwsgi and gunicorn choose the very opposite default. uwsgi preloads by default. – ospider Apr 22 '21 at 09:34
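
For the counter question in the comments above, a minimal sketch of the redis idea (using the redis-py client; host, port and the key name are assumptions):

import redis

r = redis.Redis(host='localhost', port=6379)

def count_request():
    # INCR is atomic on the redis server, so every gunicorn worker can
    # bump the same counter without any locking on the Python side.
    return r.incr('request_count')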