
I'm trying to implement a fire-and-forget mechanism using FastAPI, and I'm facing a few difficulties implementing it.

I have two applications. One is developed with FastAPI and the other with Flask. The FastAPI app will run in AWS Lambda and send requests to the Flask app running on AWS ECS.

Currently, I'm able to send a request to the Flask API and receive an immediate response from the FastAPI app. But the FastAPI app keeps running bg_tasks.add_task(make_request, request) in the background, and it times out once the Lambda execution limit (15 minutes) is reached.

FastAPI application:


from typing import Dict

import requests
from fastapi import APIRouter, BackgroundTasks

router = APIRouter()
root_url = "http://localhost:5000/"  # Flask app


def make_request(data):
    """
    Make a POST request to the Flask application.
    :param data: Data from the user to write into the file
    :return: None
    """
    print("***** Inside post *****")
    requests.post(url=root_url, data=data)
    print("***** Post completed *****")


@router.post("/write-to-file")
async def write_to_file(request: Dict, bg_tasks: BackgroundTasks):
    """
    Queue the request as a background task and return immediately.
    :param request: Request body from the user
    :param bg_tasks: Background tasks instance
    :return: A status message
    """
    print("****** Request call started ******")
    bg_tasks.add_task(make_request, request)
    print("****** Request completed ******")
    return {"Message": "Data will be written into the file"}

Flask Application:

import json
import time
from datetime import datetime

from flask import Flask, request

app = Flask(__name__)


@app.route('/', methods=['POST'])
def write():
    """
    Write the request data into the file.
    :return: A status message
    """
    request_data = request.form
    try:
        print(f"Sleep time {int(request_data['sleep_time'])}")
        time.sleep(int(request_data["sleep_time"]))
        request_data = dict(request_data)
        request_data['current_time'] = str(datetime.now())
        with open("data.txt", "a") as f:
            f.write("\n")
            f.write(json.dumps(request_data, indent=4))

        return {"Message": "Success"}

    except Exception as e:
        # Exceptions are not JSON-serializable; return the message string
        return {"Message": str(e)}

FastAPI (http://localhost:8000/write-to-file/) calls the write_to_file method, which adds each incoming request to the background task queue and runs it in the background.

write_to_file does not wait for the task to complete; it returns a response to the client immediately. make_request then triggers the Flask endpoint (http://localhost:5000/), which processes the request and writes to a file. Since make_request runs inside the AWS Lambda, if the Flask application takes hours to process, the Lambda waits just as long.

Is it possible to kill the Lambda once the request is published, or do something else to solve the timeout issue?

davidism
    Welcome to Stack Overflow! Could you clarify why did you decide to use the background queue in the first place? Is that because you want your Lambda to return the result early? – Nikolay Shebanov Apr 08 '21 at 19:35
    Thank you for the question. Yes, my goal is not to wait for the response; it should instead process the request in the background. Here the function make_request should not wait until the response from the Flask app returns, the same as the write_to_file method. – Mythily Devaraj Apr 09 '21 at 04:57

1 Answer


With the current setup, your Lambda runs for as long as the Flask endpoint takes to process your request. Effectively, both APIs run for exactly the same time.

This is because the requests.post in the Lambda function must wait for the response to finish. Given that you don't care about the result of that response, I can think of several other ways to solve this.

If I were you, I would move the queue processing to the ECS side. The Lambda would then only be responsible for putting a job into a queue that the ECS worker processes when it has capacity.

This option would also let you get rid of one of the APIs: you could query the Flask API directly and drop the Lambda function, or instead drop the Flask API and run a worker process on ECS.
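A minimal sketch of the queue approach, assuming SQS as the queue (the queue URL and the build_job_message helper are made up for illustration): the Lambda serializes the request into a message and enqueues it, returning immediately, while an ECS worker polls the queue and does the slow file write on its own schedule.

```python
import json

# Placeholder queue URL for illustration only
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/write-jobs"


def build_job_message(data: dict) -> str:
    """Serialize the user's request into a queue message body."""
    return json.dumps({"job": "write-to-file", "data": data})


def enqueue_job(data: dict) -> None:
    """Called from the Lambda handler: enqueue the job and return immediately."""
    import boto3  # AWS SDK; client created lazily so the helper above stays pure

    sqs = boto3.client("sqs")
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=build_job_message(data))
```

The Lambda finishes as soon as send_message returns (typically milliseconds), regardless of how long the actual write takes on the worker side.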

Alternatively, you could respond early on the Flask API side, which would finish your HTTP request, and thus the Lambda execution, sooner. This can be fiddly to set up and defeats the purpose of exposing an HTTP API in the first place. Also, under some circumstances, the Flask request could be terminated by the web server after its default timeout (~30 seconds).
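One way to "respond early" is to hand the slow work to a background executor inside the Flask process and return before it finishes. This framework-agnostic sketch shows the shape of it (handle_write and slow_write are made-up names; in the real app they would be the body of the / view split in two):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# One shared pool for the whole process; the slow file write runs here
executor = ThreadPoolExecutor(max_workers=4)
results = []  # stand-in for the file on disk


def slow_write(data: dict) -> None:
    """The slow part of the original view: sleep, then record the data."""
    time.sleep(float(data.get("sleep_time", 0)))
    results.append(data)


def handle_write(data: dict) -> dict:
    """Return immediately; the write continues in the background."""
    executor.submit(slow_write, data)
    return {"Message": "Accepted"}
```

The HTTP response goes out as soon as handle_write returns, so the Lambda's requests.post completes quickly, while slow_write keeps running inside the Flask process.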

And finally, in case you really want to leave your code as it is, you could set the request to time out after a short period. If you go this route, make sure to choose a timeout long enough for Flask to start processing the request:

try:
    # Give up waiting for the response after 5 seconds
    requests.post(url=root_url, data=data, timeout=5)
except requests.exceptions.Timeout:
    pass
Nikolay Shebanov
  • We couldn't move the queue processing to **ECS**, because we can't spin up additional ECS tasks when we have more work to process. Currently, when I add more processes we scale ECS per request, so it defeats that purpose. – Mythily Devaraj Apr 09 '21 at 15:13
  • Is there a way to kill the Lambda once the request is posted? A self-destructing Lambda? Or would reducing the Lambda execution threshold/time help? Because once the request is posted to ECS, we don't want to listen for the response. – Mythily Devaraj Apr 09 '21 at 15:15
  • I've updated my answer with an option to abort the request after a certain number of seconds, but I would still consider the other options first. Could you elaborate on your ECS scaling point? Did I get it correctly that you'd like to leverage auto-scaling on ECS based on the number of HTTP requests? – Nikolay Shebanov Apr 09 '21 at 16:52
  • Thank you Nikolay, let me try these things. – Mythily Devaraj Apr 12 '21 at 08:01