3

I have a Python program which 1) reads a very large file from disk (~95% of the time) and then 2) processes it and produces a relatively small output (~5% of the time). This program is to be run on terabytes of files.

Now I am looking to optimize this program by utilizing multiprocessing and multithreading. The platform I am running on is a virtual machine with 4 processors.

I plan to have a scheduler process which will launch 4 worker processes (one per processor), and each process should have some threads, since most of the work is I/O. Each thread will process one file and report its result to the main thread, which in turn will report it back to the scheduler process via IPC. The scheduler can queue these results and eventually write them to disk in order.
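Roughly, this is the layout I have in mind (just a sketch with placeholder processing, not final code):

```python
# Sketch of the layout: a scheduler spawns 4 worker processes, each of
# which runs a small thread pool over its share of the files.
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Process, Queue

def handle_file(file_name):
    # Placeholder: read the whole file and compute the small output.
    with open(file_name, 'rb') as f:
        data = f.read()
    return file_name, len(data)

def worker(files, result_queue):
    # Threads per process is the number I am unsure about.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(handle_file, files):
            result_queue.put(result)  # IPC back to the scheduler

def scheduler(all_files, n_processes=4):
    result_queue = Queue()
    chunks = [all_files[i::n_processes] for i in range(n_processes)]
    procs = [Process(target=worker, args=(chunk, result_queue)) for chunk in chunks]
    for p in procs:
        p.start()
    collected = [result_queue.get() for _ in all_files]
    for p in procs:
        p.join()
    return collected  # to be ordered and written to disk
```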

So I am wondering: how does one decide the number of processes and threads to create for such a scenario? Is there a mathematical way to figure out the best mix?

Thank you

Manik Mahajan
  • For `parallel processing`, the 'optimal' number of threads you can make is equal to the number of cores you have, as said [here](https://stackoverflow.com/questions/24149834/choosing-optimal-number-of-threads-for-parallel-processing-of-data) – Stack Oct 31 '20 at 08:46
  • Threads might not be what you want, as explained [here](https://realpython.com/python-gil/), for example. – Patol75 Oct 31 '20 at 08:59
  • 2
    If your single-process program spends almost all of its time reading data, then parallel processing won't help. Each process will just spend most of its time waiting for an opportunity to read data from the disk whilst another process is utilising the disk. Multiprocessing helps you utilise additional CPUs, not magically make your disk reads faster. – Dunes Oct 31 '20 at 09:08
  • How many disks are serving the large file ? Increasing parallelism while your bottleneck seems to be I/O may not do much good: you may get less throughput from seek costs (if not spinning-disk kind of seek cost, maybe poorer read-ahead efficiency). But if you have, for example, 2 disks in md raid1 on Linux, having 2 I/O operations in parallel may double the throughput: each will read from one disk. – vpelletier Nov 04 '20 at 23:27
  • It doesn't seem that threading will help, with the way you've described your problem; threading would assist most when your I/O problem deals with multiple requests, but in this case you are waiting on one file. Unless you can specify which parts of this file to read for each thread, threading won't help speed up the file read. – jthree8 Nov 06 '20 at 03:10

5 Answers

8

I think I would arrange it the inverse of what you are doing. That is, I would create a thread pool of a certain size that would be responsible for producing the results. Each task submitted to this pool would be passed a processor pool as an argument, which the worker thread could use for submitting the CPU-bound portions of the work. In other words, the thread pool workers would primarily do all the disk-related operations, handing off any CPU-intensive work to the processor pool.

The size of the processor pool should just be the number of processors you have in your environment. It's difficult to give a precise size for the thread pool; it depends on how many concurrent disk operations your system can handle before the law of diminishing returns comes into play. It also depends on your memory: the larger the pool, the greater the memory resources taken, especially if entire files have to be read into memory for processing. So you may have to experiment with this value. The code below outlines these ideas. What you gain from the thread pool is a greater overlapping of I/O operations than you would achieve with just a small processor pool:

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
from functools import partial
import os

def cpu_bound_function(arg1, arg2):
    ...
    return some_result



def io_bound_function(process_pool_executor, file_name):
    with open(file_name, 'r') as f:
        # Do disk related operations:
        . . . # code omitted
        # Now we have to do a CPU-intensive operation:
        future = process_pool_executor.submit(cpu_bound_function, arg1, arg2)
        result = future.result() # get result
        return result
    
file_list = [file_1, file_2, file_n]
N_FILES = len(file_list)
MAX_THREADS = 50 # depends on your configuration on how well the I/O can be overlapped
N_THREADS = min(N_FILES, MAX_THREADS) # no point in creating more threads than required
N_PROCESSES = os.cpu_count() # use the number of processors you have

with ThreadPoolExecutor(N_THREADS) as thread_pool_executor:
    with ProcessPoolExecutor(N_PROCESSES) as process_pool_executor:
        results = thread_pool_executor.map(partial(io_bound_function, process_pool_executor), file_list)

Important Note:

A far simpler approach is to just have a single processor pool whose size is greater than the number of CPU processors you have, for example, 25. The worker processes would do both the I/O and the CPU operations. Even though you then have more processes than CPUs, many of the processes will be in a wait state waiting for I/O to complete, allowing the CPU-intensive work to run.

The downside of this approach is that the overhead of creating N processes is far greater than the overhead of creating N threads plus a small number of processes. However, as the running time of the tasks submitted to the pool grows, this fixed overhead becomes a progressively smaller percentage of the total run time. So, if your tasks are not trivial, this could be a reasonably performant simplification.
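You can get a feel for that startup overhead with a quick (and unscientific) measurement; `start_and_stop` is just a helper name I made up here:

```python
# A quick, unscientific comparison of pool startup/teardown overhead.
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import timeit

def start_and_stop(executor_cls, n):
    # Create a pool of n workers, submit one trivial task so workers
    # actually spin up, then shut the pool down.
    with executor_cls(n) as executor:
        executor.submit(int, 0).result()

if __name__ == '__main__':
    print('threads  :', timeit.timeit(lambda: start_and_stop(ThreadPoolExecutor, 8), number=3))
    print('processes:', timeit.timeit(lambda: start_and_stop(ProcessPoolExecutor, 8), number=3))
```

On most systems the process-pool line is noticeably larger, which is the overhead being traded away as task run times grow.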

Update: Benchmarks of Both Approaches

I did some benchmarks against the two approaches processing 24 files whose sizes were approximately 10,000KB (actually, these were just 3 different files processed 8 times each, so there might have been some caching done):

Method 1 (thread pool + processor pool)

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
from functools import partial
import os
from math import sqrt
import timeit


def cpu_bound_function(b):
    sum = 0.0
    for x in b:
        sum += sqrt(float(x))
    return sum

def io_bound_function(process_pool_executor, file_name):
    with open(file_name, 'rb') as f:
        b = f.read()
        future = process_pool_executor.submit(cpu_bound_function, b)
        result = future.result() # get result
        return result

def main():
    file_list = ['/download/httpd-2.4.16-win32-VC14.zip'] * 8 + ['/download/curlmanager-1.0.6-x64.exe'] * 8 + ['/download/Element_v2.8.0_UserManual_RevA.pdf'] * 8
    N_FILES = len(file_list)
    MAX_THREADS = 50 # depends on your configuration on how well the I/O can be overlapped
    N_THREADS = min(N_FILES, MAX_THREADS) # no point in creating more threads than required
    N_PROCESSES = os.cpu_count() # use the number of processors you have

    with ThreadPoolExecutor(N_THREADS) as thread_pool_executor:
        with ProcessPoolExecutor(N_PROCESSES) as process_pool_executor:
            results = list(thread_pool_executor.map(partial(io_bound_function, process_pool_executor), file_list))
            print(results)

if __name__ == '__main__':
    print(timeit.timeit(stmt='main()', number=1, globals=globals()))

Method 2 (processor pool only)

from concurrent.futures import ProcessPoolExecutor
from math import sqrt
import timeit


def cpu_bound_function(b):
    sum = 0.0
    for x in b:
        sum += sqrt(float(x))
    return sum

def io_bound_function(file_name):
    with open(file_name, 'rb') as f:
        b = f.read()
        result = cpu_bound_function(b)
        return result

def main():
    file_list = ['/download/httpd-2.4.16-win32-VC14.zip'] * 8 + ['/download/curlmanager-1.0.6-x64.exe'] * 8 + ['/download/Element_v2.8.0_UserManual_RevA.pdf'] * 8
    N_FILES = len(file_list)
    MAX_PROCESSES = 50 # depends on your configuration on how well the I/O can be overlapped
    N_PROCESSES = min(N_FILES, MAX_PROCESSES) # no point in creating more processes than required

    with ProcessPoolExecutor(N_PROCESSES) as process_pool_executor:
        results = list(process_pool_executor.map(io_bound_function, file_list))
        print(results)

if __name__ == '__main__':
    print(timeit.timeit(stmt='main()', number=1, globals=globals()))

Results:

(I have 8 cores)

Thread Pool + Processor Pool: 13.5 seconds
Processor Pool Alone: 13.3 seconds

Conclusion: I would first try the simpler approach of just using a processor pool for everything. Now the tricky bit is deciding the maximum number of processes to create, which was part of your original question (and which had a simple answer when the pool was doing only CPU-intensive computations).

If the number of files you are reading is not too large, then the point is moot: you can have one process per file. But if you have hundreds of files, you will not want hundreds of processes in your pool (there is an upper limit to how many processes you can create, and again there are those nasty memory constraints). There is just no way I can give you an exact number. If you do have a large number of files, start with a smallish pool size and keep incrementing it until you get no further benefit (of course, you probably do not want to process more files than some maximum number during these tests, or you will run forever just deciding on a good pool size for the real run).
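That incremental search could be automated along these lines (a sketch: `process_one` is a stand-in for reading and processing one file from a sample of your data, and the doubling step and 5% tolerance are arbitrary choices):

```python
# Grow the pool size until the wall-clock time stops improving meaningfully.
from concurrent.futures import ProcessPoolExecutor
import time

def process_one(x):
    # Stand-in for reading + processing one file.
    return sum(i * i for i in range(x))

def time_pool(n_workers, tasks):
    start = time.perf_counter()
    with ProcessPoolExecutor(n_workers) as pool:
        list(pool.map(process_one, tasks))
    return time.perf_counter() - start

def find_pool_size(tasks, max_workers=32, tolerance=0.95):
    best_n, best_t = 1, time_pool(1, tasks)
    n = 2
    while n <= max_workers:
        t = time_pool(n, tasks)
        if t > best_t * tolerance:   # less than ~5% improvement: stop
            break
        best_n, best_t = n, t
        n *= 2
    return best_n
```

Run this on a representative subset of files, not the whole terabyte-scale input.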

Booboo
0

For parallel processing: I saw this question, and quoting the accepted answer:

In practice, it can be difficult to find the optimal number of threads and even that number will likely vary each time you run the program. So, theoretically, the optimal number of threads will be the number of cores you have on your machine. If your cores are "hyper threaded" (as Intel calls it) it can run 2 threads on each core. Then, in that case, the optimal number of threads is double the number of cores on your machine.

For multiprocessing: Someone asked a similar question here, and the accepted answer said this:

If all of your threads/processes are indeed CPU-bound, you should run as many processes as the CPU reports cores. Due to HyperThreading, each physical CPU cores may be able to present multiple virtual cores. Call multiprocessing.cpu_count to get the number of virtual cores.

If only a fraction p of your threads' work is CPU-bound, you can adjust that number by multiplying by p. For example, if half your processes' work is CPU-bound (p = 0.5) and you have two CPUs with 4 cores each and 2x HyperThreading, you should start 0.5 * 2 * 4 * 2 = 8 processes.
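As a small illustration of that formula (`suggested_process_count` is my own helper name; `multiprocessing.cpu_count()` already reports virtual cores, including HyperThreading):

```python
# Compute a suggested process-pool size from the CPU-bound fraction p.
import multiprocessing

def suggested_process_count(cpu_bound_fraction, virtual_cores=None):
    if virtual_cores is None:
        # Defaults to what the OS reports (includes hyper-threaded cores).
        virtual_cores = multiprocessing.cpu_count()
    return max(1, round(cpu_bound_fraction * virtual_cores))

# The example from the quote: p = 0.5, 2 CPUs x 4 cores x 2 (HT) = 16 virtual cores.
print(suggested_process_count(0.5, virtual_cores=16))  # -> 8
```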

The key here is to understand what machine you are using; from that, you can choose a nearly optimal number of threads/processes to split the execution of your code across. I say nearly optimal because it will vary a little every time you run your script, so it is difficult to predict this optimal number from a mathematical point of view.

For your specific situation, if your machine has 4 cores, I would recommend creating at most 4 threads, and then splitting them:

  • 1 to the main thread.
  • 3 for file reading and processing.
Stack
0

Using multiple processes to speed up I/O performance may not be a good idea. Check this and the sample code below it to see whether it is helpful.

Wilson.F
0

One idea could be to have a thread that only reads the file (if I understood well, there is only one file) and pushes the independent parts (for example, rows) into a queue as messages. The messages can then be processed by 4 threads. In this way, you can optimize the load between the processors.
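A minimal sketch of that idea (the per-row work here is just a placeholder; note that pure-Python processing in threads is still serialized by the GIL, as mentioned in the comments):

```python
# One reader thread feeds a bounded queue; 4 worker threads consume rows.
import queue
import threading

SENTINEL = object()

def reader(file_name, q, n_workers):
    with open(file_name, 'r') as f:
        for row in f:
            q.put(row)
    for _ in range(n_workers):       # one stop marker per worker
        q.put(SENTINEL)

def worker(q, results):
    while True:
        row = q.get()
        if row is SENTINEL:
            break
        results.append(len(row))     # placeholder for the real processing

def process_file(file_name, n_workers=4):
    q = queue.Queue(maxsize=1000)    # bounded, so the reader cannot outrun the workers
    results = []
    workers = [threading.Thread(target=worker, args=(q, results))
               for _ in range(n_workers)]
    for t in workers:
        t.start()
    reader(file_name, q, n_workers)  # read in the current thread
    for t in workers:
        t.join()
    return results
```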

François B.
0

For a strongly I/O-bound process (like what you are describing), you do not necessarily need multithreading or multiprocessing: you could also use more advanced I/O primitives from your OS.

For example, on Linux you can submit read requests to the kernel along with a suitably sized mutable buffer and be notified when the buffer is filled. This can be done using the AIO API, for which I've written a pure-Python binding: python-libaio (libaio on PyPI), or with the more recent io_uring API, for which there seems to be a CFFI Python binding (liburing on PyPI) (I have used neither io_uring nor this Python binding).

This removes the complexity of parallel processing at your level, may reduce the number of OS/userland context switches (reducing CPU time even further), and lets the OS know more about what you are trying to do, giving it the opportunity to schedule the I/O more efficiently (in a virtualised environment, I would not be surprised if it reduced the number of data copies, although I have not tried it myself).

Of course, the downside is that your program will be more tightly bound to the OS you are executing it on, requiring more effort to get it to run on another one.

vpelletier