I have a Python script that I run from the Anaconda Prompt on Windows. The script analyses one variable at a time, so the command line loops over the files in a folder and executes the script once per file. It is done this way because looping within the script itself eventually results in a memory error. However, running the script for one file at a time is too slow, and I could easily run it for five files at once, speeding up the process five-fold.
I know that there are modules like subprocess and multiprocessing, but I have a feeling there might be a simpler way of doing it. Unfortunately, I was not able to find one.
The code I am running at the moment is as follows:
for /F %i in ('dir "Directory_containing_files" /b /s') do (python Executed_script.py -i %i -o "Folder_to_write_output_to")
I hope there is some piece of code that could look something like this:
for /F %i in ('dir "Directory_containing_files" /b /s') do (python Executed_script.py -i %i -o "Folder_to_write_output_to") -n 10
where the effect of the -n 10 option would be that 10 files are processed in parallel.
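For what it is worth, I did sketch what a Python driver script might look like using subprocess with a thread pool (concurrent.futures), replacing the batch loop entirely, but I was hoping for something simpler. The folder names and script name below are the placeholders from above; the worker count of 10 is just an example:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Placeholder paths from the batch loop above -- adjust as needed.
INPUT_DIR = Path("Directory_containing_files")
OUTPUT_DIR = "Folder_to_write_output_to"
MAX_WORKERS = 10  # how many files to process in parallel

def run_one(path, script="Executed_script.py", out_dir=OUTPUT_DIR):
    """Launch one fresh interpreter per file, so its memory is freed on exit."""
    result = subprocess.run(
        [sys.executable, script, "-i", str(path), "-o", out_dir],
        check=False,
    )
    return result.returncode

def main():
    # Same file set as `dir /b /s`: all files in the folder, recursively.
    files = sorted(p for p in INPUT_DIR.rglob("*") if p.is_file())
    # Threads suffice here: the real work happens in the child processes,
    # and the pool threads only sit waiting for them to finish.
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        codes = list(pool.map(run_one, files))
    print(f"{codes.count(0)} of {len(files)} runs succeeded")

if __name__ == "__main__":
    if INPUT_DIR.is_dir():
        main()
    else:
        print(f"input folder {INPUT_DIR} does not exist")
```

Since each file still gets its own Python process, this should keep the memory behaviour of the one-file-at-a-time loop, just with up to MAX_WORKERS processes alive at once.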