27

I have a Flask app, served by Nginx and Gunicorn with 3 workers. My Flask app is an API microservice for NLP tasks, and I am using the spaCy library for it.

My problem is that the workers consume a huge amount of RAM: loading the spaCy pipeline with spacy.load('en') is very memory-intensive, and since I have 3 Gunicorn workers, each one takes about 400MB of RAM.

My question is: is there a way to load the pipeline once and share it across all my Gunicorn workers?

Lee

4 Answers

1

I need to share gigabytes of data among instances and use a memory-mapped file for it (https://docs.python.org/3/library/mmap.html). If the amount of data you need to retrieve from the pool per request is small, this works fine. Otherwise you can mount a ramdisk and locate the mapped file there.

As I am not familiar with spaCy, I am not sure if this helps. I would have one worker actually process the data while loading (spacy.load?) and write the resulting doc (pickled/marshalled) to the memory-mapped file, from which the other workers can read it.

To get a better feel for mmap, have a look at https://realpython.com/python-mmap/
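A bare-bones sketch of the idea (the file name and size here are made up):

    import mmap

    SIZE = 1024 * 1024  # size of the backing file, example only

    # Create the backing file once, e.g. before the workers start:
    with open('/tmp/shared.mmf', 'wb') as f:
        f.truncate(SIZE)

    # Writer: one process serializes its results into the mapped file.
    def write_payload(payload: bytes) -> None:
        with open('/tmp/shared.mmf', 'r+b') as f:
            mm = mmap.mmap(f.fileno(), SIZE)
            mm[:len(payload)] = payload  # e.g. a pickled doc
            mm.flush()
            mm.close()

    # Reader: the other workers map the same file and read it back.
    def read_payload(length: int) -> bytes:
        with open('/tmp/shared.mmf', 'rb') as f:
            mm = mmap.mmap(f.fileno(), SIZE, access=mmap.ACCESS_READ)
            data = mm[:length]
            mm.close()
            return data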

markus barth
0

One workaround: you can load the spaCy pipeline beforehand, pickle (or serialize in any convenient way) the resulting object, and store it in a DB or on the file system. Each worker can then fetch the serialized object and simply deserialize it.
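A rough sketch of that idea (the file path is arbitrary; spaCy Language objects are generally picklable, though note the caveat in the comments below):

    import pickle

    import spacy

    # One-off step, run before launching the Flask app:
    nlp = spacy.load('en')
    with open('/tmp/pipeline.pkl', 'wb') as f:
        pickle.dump(nlp, f)

    # In each worker, deserialize instead of calling spacy.load:
    with open('/tmp/pipeline.pkl', 'rb') as f:
        nlp = pickle.load(f)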

  • Would pickling it in a module work? I am thinking of loading the module in a script, importing it, and then launching the workers. If I did that, would the workers pick up the import? – OneLiner Feb 18 '21 at 15:24
  • 1
    When you load the module in a script and import it, the import itself will certainly work. But you will still face the high memory consumption, because each worker will load it independently again. You have to pickle it separately before launching the Flask app and store the serialized file in a place accessible to all the workers. – Krithika Ramakrishnan Feb 19 '21 at 09:20
0

Sharing the pipeline in memory between workers may help you.

Please check gc.freeze (available since Python 3.7).

I think you can do this in your app.py (it is roughly the sequence the CPython docs recommend for servers that fork without exec):

  1. disable the gc early, with gc.disable(), so collections don't punch freed holes into the memory pages
  2. load the pipeline or any other resource that is going to use a big amount of memory
  3. freeze the gc with gc.freeze(), which moves everything loaded so far into a permanent generation that future collections ignore

and,

  • make sure your workers will not modify (directly or indirectly) any object created before the freeze
  • pass the app.py to gunicorn with --preload, so it runs once in the master before the workers are forked (each worker can then call gc.enable() again)

When the fork happens, the memory pages holding the big resources will not actually be copied by the OS (this is copy-on-write), because you have made sure there are no write operations on them.

If you do not freeze the gc, those pages will still get written to, because the garbage collector updates reference-count bookkeeping in every tracked object's header during a collection pass. That is why the freeze matters.
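A minimal, untested sketch of what that could look like in app.py (the model name and the route are placeholders):

    import gc

    import spacy
    from flask import Flask, jsonify, request

    gc.disable()  # no collections while the big objects are being built

    # Loaded once in the Gunicorn master when started with --preload;
    # the workers then inherit these pages via copy-on-write.
    nlp = spacy.load('en')

    gc.freeze()   # park everything loaded so far in the permanent
                  # generation, so later collections never touch it

    app = Flask(__name__)

    @app.route('/tokens', methods=['POST'])
    def tokens():
        doc = nlp(request.get_data(as_text=True))
        return jsonify([t.text for t in doc])

Start it with gunicorn --preload -w 3 app:app, and re-enable the collector in each worker, e.g. with a post_fork hook in gunicorn.conf.py that calls gc.enable().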

I know of this approach, but I haven't tried it myself.

Dharman
Nathan Hardy
0

This is an answer that works in 2021, tested on Python 3.6 and 3.9. I had the same setup as you, using Flask to deploy a spaCy NLU API. The solution was simply to append --preload to the gunicorn command, like so: gunicorn src.main:myFlaskApp --preload. This makes the fork happen after the entire src/main.py file has been executed, rather than right after myFlaskApp = Flask(__name__), so the model is loaded once in the master and shared with the workers.
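For reference, a minimal sketch of such a src/main.py (the model name and the route are placeholders for your own):

    import spacy
    from flask import Flask

    # With --preload this module-level code runs once in the Gunicorn
    # master before the fork, so the workers share the loaded model
    # through copy-on-write pages.
    nlp = spacy.load('en_core_web_sm')

    myFlaskApp = Flask(__name__)

    @myFlaskApp.route('/ents/<text>')
    def ents(text):
        doc = nlp(text)
        return {'ents': [(e.text, e.label_) for e in doc.ents]}

Then launch it with gunicorn src.main:myFlaskApp --preload -w 3.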

thethiny