I need to invoke one API with a series of input ids and load each response into a GCS bucket. The current code is sequential and takes a long time to get the responses for all input ids. Is there any way to parallelize the API calls and the load process? Current code:

```python
def get_api_response():
    rows = [...]  # list of ids
    for id in rows:
        try:  # below set of lines needs to be executed in parallel for a set of ids
            response = requests.get("url to call" + id)
            api_response = response.json()  # .json() is a method, not an attribute
            if "NOT_FOUND" in api_response:
                print('No data found')
            else:
                api_response.update({"currentDate": timestr})
                ot = json.dumps(api_response)
                print(ot)
                g = upload_to_bucket(blob_name, ot, bucket_name)
                print(g)
        except Exception as e:
            print(e)
```
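Since the per-id work here is I/O-bound (an HTTP request followed by an upload), one common approach is a thread pool via `concurrent.futures`. Below is a minimal sketch, not a drop-in replacement: `upload_to_bucket`, `blob_name`, `bucket_name`, `timestr`, and the URL are assumed to exist as in the question's code.

```python
import json
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests  # third-party, as in the original code

timestr = time.strftime("%Y%m%d-%H%M%S")

def fetch_and_upload(id_):
    """Fetch one id from the API and upload the response to the bucket."""
    response = requests.get("url to call" + id_)
    api_response = response.json()
    if "NOT_FOUND" in api_response:
        return f"{id_}: no data found"
    api_response["currentDate"] = timestr
    # upload_to_bucket / blob_name / bucket_name are the question's
    # existing helpers and globals (assumed, not defined here)
    return upload_to_bucket(blob_name, json.dumps(api_response), bucket_name)

def get_api_response(rows, max_workers=8):
    # Threads suit I/O-bound work despite the GIL; tune max_workers
    # to what the API and your machine tolerate.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {executor.submit(fetch_and_upload, id_): id_ for id_ in rows}
        for future in as_completed(futures):
            try:
                print(future.result())
            except Exception as e:
                print(f"{futures[future]} failed: {e}")
```

Each id is submitted as an independent task, results are printed as they complete, and a failure in one id no longer aborts the rest of the loop.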
  - [multiprocessing](https://docs.python.org/3/lib) – Lei Yang Feb 25 '22 at 08:08
  - Hi, I found this [stackoverflow thread](https://stackoverflow.com/a/16992374/15774177) which deals with a problem similar to the one you are facing; let me know if it helps. – Zeenath S N Mar 01 '22 at 12:14

0 Answers