The docs aren't very explicit about this. In general: once a process finishes its task and is freed, it picks up the next available task (so yes, tasks are queued up). If you try
```python
from multiprocessing import Pool
from time import sleep

def sleeping(i):
    print(f"{i} started")
    sleep(5)
    print(f"{i} ended")

if __name__ == "__main__":
    with Pool(processes=5) as p:
        results = [p.apply_async(sleeping, args=(i,)) for i in range(10)]
        results = [result.get() for result in results]
```
then you'll get output like

```
0 started
1 started
2 started
3 started
4 started
3 ended
0 ended
5 started
6 started
1 ended
7 started
2 ended
8 started
4 ended
9 started
5 ended
6 ended
7 ended
8 ended
9 ended
```
Depending on the framework, it could also be that once a process has finished its workload, it is terminated, a new one is started in its place, and the next available task is then taken over by the new process. From the docs:
> **Note:** Worker processes within a Pool typically live for the complete duration of the Pool’s work queue. A frequent pattern found in other systems (such as Apache, mod_wsgi, etc) to free resources held by workers is to allow a worker within a pool to complete only a set amount of work before being exiting, being cleaned up and a new process spawned to replace the old one. The maxtasksperchild argument to the Pool exposes this ability to the end user.