
I need to generate a job-id for a user-request call, which I thought of handling through Python threading using the block below:

from flask_restful import Resource
from flask import copy_current_request_context, request
import threading
import uuid

class ScoreReportGenerator:
    def trigger_job(self):
        try:
            # Do memory-intensive process
            return "success"
        except Exception:
            return "failure"

class JobSubmitter(Resource):
    def post(self):
        job_id = uuid.uuid4()
        payload_data = request.get_json()  # job payload from the request body

        @copy_current_request_context
        def start_score_report_generator_thread(payload_data, job_id):
            score_report_generator = ScoreReportGenerator()
            score_report_generator.trigger_job()

        # Non-daemon thread so the job keeps running after the response is sent
        t = threading.Thread(
            target=start_score_report_generator_thread,
            args=(payload_data, job_id),
            daemon=False,
        )
        t.start()

        response = dict()
        response["status"] = "RUNNING"
        response["jobId"] = str(job_id)
        return response

What I have noticed is that around 70 GB of RAM is occupied by this spawned thread, and after the thread completes, the 70 GB remains occupied. Only after killing the whole Python application is the RAM released.

Looking forward to suggestions on how to release the RAM; any help is welcome!

Thanks in advance!

  • Find the type of object that leads to the memory leak, then release the object when you have finished using it. – ElapsedSoul Dec 15 '20 at 08:52
  • There are 1725 hits on Stack Overflow when searching for "release memory" among Python questions, including excellent answers explaining why this happens and why you should not worry about it. By the way, I have 8 GB of RAM in my system and 8 GB of swap and never need more. What uses 70 GB? – Menno Hölscher Dec 15 '20 at 09:45
  • If you run ML-related models, you will obviously end up using this much RAM; there are numerous posts across popular sites about why ML models take a large amount of RAM when you load them into memory to get immediate results. If possible, point me to a few of the best posts with proper reasoning; I'd be glad to see them! – Mohamed Niyaz Sirajudeen Dec 15 '20 at 10:37
  • "If you want to run ML-Related models obviously you will end up with using this much memory of RAM..." Indeed, I just did not expect ML projects to use Flask. The first answer [here](https://stackoverflow.com/questions/15455048) tells how memory is managed. It is 7 years old, so it may refer to Python 2. The accepted answer [here](https://stackoverflow.com/questions/7101404/) does not give the reasoning, but shows a way to check whether you are really leaking memory (a sketch of that check follows these comments). Similar to the first one, but explained differently, is the accepted answer to [this one](https://stackoverflow.com/questions/39100971). – Menno Hölscher Dec 15 '20 at 12:20

1 Answer


A solution that might solve the problem for you is to make sure the garbage collector is enabled:

import gc
gc.enable()

Then check whether automatic garbage collection solves your problem. If it doesn't, check the documentation for a manual procedure that does it for you.

https://docs.python.org/3/library/gc.html
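
As a minimal sketch of such a manual procedure (the large list below is only a placeholder for the job's real in-memory state), you would drop the last reference to the heavy data and force a collection:

import gc

def run_heavy_job():
    data = [0] * 10_000_000  # placeholder for the job's large in-memory state
    # ... use `data` here ...
    del data      # drop the last reference so the object becomes garbage
    gc.collect()  # force an immediate full collection

run_heavy_job()

Note that even after gc.collect(), CPython's allocator may keep freed memory reserved for reuse instead of returning it to the OS; a common workaround is to run such jobs in a separate process, whose memory the OS reclaims completely when it exits.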

Gabriel Pellegrino