Use the threading library. Keep the main thread free to handle responses, and spin off 'job' threads that are chained together with join() so they run one after another, forming a queue.
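A minimal sketch of that idea using only the standard library is below. Rather than chaining join() calls, it drains a queue.Queue from a daemon worker thread, which gives the same FIFO, one-job-at-a-time behavior; `process_job` is just a placeholder for the real work in your app.

```python
import queue
import threading
import time

job_queue = queue.Queue()

def process_job(job_id, payload):
    # Placeholder for the real data collection / processing step.
    time.sleep(1)
    print(f"finished job {job_id}")

def worker():
    while True:
        job_id, payload = job_queue.get()   # blocks until a job is enqueued
        try:
            process_job(job_id, payload)
        finally:
            job_queue.task_done()

# Daemon worker thread(s) drain the queue in the background; the main
# thread stays free to accept requests and return responses immediately.
threading.Thread(target=worker, daemon=True).start()

def submit(job_id, payload):
    job_queue.put((job_id, payload))        # enqueue and return right away
```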
You'll need to provide the API user with a job id (best to persist these, along with progress and status information, outside the app in a database), and then let them query the job's status or download its results from another endpoint. You could keep a separate queue of threads to handle anything compute-intensive related to collecting/downloading.
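As a rough illustration of those two endpoints, assuming Flask and SQLite purely to keep the sketch self-contained (swap in whatever framework and database you actually run):

```python
import sqlite3
import uuid
from flask import Flask, jsonify

app = Flask(__name__)
DB = "jobs.db"  # illustrative path

def db():
    conn = sqlite3.connect(DB)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, status TEXT, result TEXT)"
    )
    conn.commit()
    return conn

@app.route("/jobs", methods=["POST"])
def create_job():
    job_id = str(uuid.uuid4())
    with db() as conn:
        conn.execute("INSERT INTO jobs (id, status) VALUES (?, 'pending')", (job_id,))
    # Hand the job off to the worker queue here (see the threading sketch above).
    return jsonify({"job_id": job_id}), 202

@app.route("/jobs/<job_id>", methods=["GET"])
def job_status(job_id):
    row = db().execute(
        "SELECT status, result FROM jobs WHERE id = ?", (job_id,)
    ).fetchone()
    if row is None:
        return jsonify({"error": "unknown job"}), 404
    return jsonify({"job_id": job_id, "status": row[0], "result": row[1]})

if __name__ == "__main__":
    app.run()
```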
All that said, this can also be accomplished with a microservice architecture in which you have one app scheduling jobs, one app retrieving/processing data, and one app handling status/download requests. These would be joined via HTTP interfaces (RESTful would be great) and a database for common persistence of data.
The benefit of this last approach is that each app can be scaled independently, in terms of both availability and resources, within a framework like Kubernetes.
UPDATE:
Just read your original post, and your main issue seems to be that you're persisting your data in a global variable rather than a database. Keep your data in a database, and provide it to clients either through a separate application or through a set of threads set aside for that purpose in your current app.
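For example, something like the following replaces a module-level global with a small table; SQLite is used here only to keep the sketch self-contained, and the names are illustrative.

```python
import sqlite3

DB_PATH = "app.db"  # illustrative; any shared database works

def _conn():
    # New connection per call so worker threads don't share one sqlite3 handle.
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS results (job_id TEXT PRIMARY KEY, payload TEXT)"
    )
    conn.commit()
    return conn

def save_result(job_id, payload):
    # Replaces something like `GLOBAL_RESULTS[job_id] = payload`.
    with _conn() as conn:
        conn.execute(
            "INSERT OR REPLACE INTO results (job_id, payload) VALUES (?, ?)",
            (job_id, payload),
        )

def load_result(job_id):
    row = _conn().execute(
        "SELECT payload FROM results WHERE job_id = ?", (job_id,)
    ).fetchone()
    return row[0] if row else None
```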
UPDATE response to OP comment:
Stefano, in the use case you're describing, there is no need for any of the components to be connected to each other. They all only need to be connected to the database.
The data collection service should collect the data, and then submit it to the database for storage, where the "request data" component can find and retrieve it.
If there is a need for user input to this process, then the "submit request for data" component should accept that request, provide the user with an id, and store that job's requirements in the database for the data collector component to discover. You would then need one more component for serving the job's status/progress from the database to the user.
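The data collector side can then be a simple polling loop against the same jobs table (layout matching the earlier sketch); `collect_data` is a stand-in for your real collection code, and this naive version assumes a single collector process.

```python
import sqlite3
import time

def collect_data(job_id):
    return f"data for {job_id}"   # placeholder for the real collection step

def collector_loop(db_path="jobs.db", poll_seconds=5):
    while True:
        conn = sqlite3.connect(db_path)
        row = conn.execute(
            "SELECT id FROM jobs WHERE status = 'pending' LIMIT 1"
        ).fetchone()
        if row:
            job_id = row[0]
            with conn:  # mark the job as picked up
                conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (job_id,))
            result = collect_data(job_id)
            with conn:  # store the result where the status/download component can find it
                conn.execute(
                    "UPDATE jobs SET status = 'done', result = ? WHERE id = ?",
                    (result, job_id),
                )
        conn.close()
        time.sleep(poll_seconds)
```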
What DB are you using? If it's slow/busy, you can scale the resources available to it (RAM), or you can look at batching the updates from the data collector, which is the most likely source of unnecessary DB overhead. How many transactions are you submitting per second? And of what size?
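If batching turns out to help, the gist is to buffer finished records in memory and write them in a single transaction, e.g. with executemany. A rough sketch, again assuming SQLite and the jobs table from above:

```python
import sqlite3

def flush_status_batch(db_path, finished):
    # finished: list of (result, job_id) pairs the collector has buffered in memory
    conn = sqlite3.connect(db_path)
    with conn:  # one transaction/commit for the whole batch instead of one per record
        conn.executemany(
            "UPDATE jobs SET status = 'done', result = ? WHERE id = ?",
            finished,
        )
    conn.close()

# e.g. flush when the buffer reaches a few hundred records or every few seconds,
# rather than committing after every single update.
```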
And also, if you're Italian, you can ask me in your own language if it's easier to communicate these technical details.