
I want to encrypt a list of 300 numbers using homomorphic (paillier) encryption. This takes roughly 3000ms on my notebook, a lot longer on my raspberry pi. I would like to speed that up, so I tried to use multithreading:

from multiprocessing.dummy import Pool as ThreadPool

def test_1_performance_of_encryption(self):

    print("Test: Blind encryption performance test")
    print("-----------------------------")
    print()
    # Only blinds are encrypted
    for y in range(0, ((NUMBER_OF_RUNS//SENSOR_SAMPLES_PER_BLIND)*NUM_OF_DATA_PROCESSORS)):
        print("Round {}:".format(y+1))
        print("Encrypting {} blinds...".format(NUM_OF_SENSOR_SAMPLES))
        encrypted_blinds.clear()
        millis_start = int(round(time.time() * 1000))

        for x in range(0, NUM_OF_SENSOR_SAMPLES):
            encrypted_blinds.append(public_keys[0].encrypt(blinds[x]))

        millis_end = int(round(time.time() * 1000))
        time_elapsed_enc.append(millis_end - millis_start)
        print("Time elapsed: {}ms".format(time_elapsed_enc[y]))

    print("Test finished. Time elapsed:")
    print("Min: {} | Max: {} | Avg: {}".format(min(time_elapsed_enc), max(time_elapsed_enc),
                                               (sum(time_elapsed_enc)/len(time_elapsed_enc))))
    print()

@profile
def test_1a_performance_of_encryption_multithreaded(self):

    print("Test: Blind encryption performance test with {} threads".format(NUM_OF_THREADS))
    pool = ThreadPool(NUM_OF_THREADS)
    for y in range(0, ((NUMBER_OF_RUNS//SENSOR_SAMPLES_PER_BLIND)*NUM_OF_DATA_PROCESSORS)):
        print("Round {}:".format(y+1))
        print("Encrypting {} blinds...".format(len(blinds)))
        millis_start = int(round(time.time() * 1000))

        encrypted_blinds_multithreaded = pool.map(public_keys[0].encrypt, blinds)

        millis_end = int(round(time.time() * 1000))
        time_elapsed_enc_multithread.append(millis_end - millis_start)
        print("Time elapsed: {}ms".format(time_elapsed_enc_multithread[y]))

    print("Test finished. Time elapsed:")
    print("Min: {} | Max: {} | Avg: {}".format(min(time_elapsed_enc_multithread), max(time_elapsed_enc_multithread),
                                               (sum(time_elapsed_enc_multithread) / len(time_elapsed_enc_multithread))))
    print()

However, both tests finish in more or less exactly the same amount of time. While the single-threaded method uses one core at 100%, the multithreaded version uses all of them, yet the total load still settles at exactly 1 (equivalent to one core at 100%). Am I doing anything wrong here? I have read this question and its answers: Python multiprocessing.Pool() doesn't use 100% of each CPU. However, I don't believe the cause here is interprocess communication, as it would be very strange for the load to settle at exactly 1...

EDIT: I was using multiprocessing.dummy instead of multiprocessing. multiprocessing.dummy uses multiple threads instead of processes, and multiple threads cannot execute Python bytecode in parallel due to the GIL (global interpreter lock). I fixed it by changing multiprocessing.dummy to multiprocessing. I now have n processes and 100% CPU usage on all cores.

  • Based on the code above, there is very little we can do to help you. What is `public_keys`? What does `encrypt` do? What is `blinds`? If you want a meaningful response from the community, you'll need to give us a [MVCE](https://stackoverflow.com/help/mcve) to work with. – MPA Apr 26 '18 at 14:45
  • Sorry, I thought this was the fastest way to show you what I am doing. In the meantime, I believe I found the solution to my problem (shame on me for not searching properly before): https://stackoverflow.com/questions/203912/does-python-support-multiprocessor-multicore-programming Apparently, the Python interpreter does not allow parallel execution of threads... – Gasp0de Apr 26 '18 at 15:02
  • I think that post is outdated. I have recently performed computations in parallel within a Python interpreter using `multiprocessing`, and I think they worked around the issue with the GIL some time ago already. – MPA Apr 26 '18 at 15:08
  • You are correct in a way. Multiprocessing is using multiple processes, that, unlike threads, are not blocked by the GIL it seems. The mistake I made was introduced by copying the example from the question in my original post, which, for whatever reason uses multiprocessing.dummy instead of multiprocessing. – Gasp0de Apr 26 '18 at 15:19
  • If your problem is solved, could you please update your post, clearly explaining what was wrong with it and how you solved it? Other people with similar problems may one day find this post, but perhaps not read through the comments. – MPA Apr 26 '18 at 18:29
  • @MPA did that. Thank you for your time! – Gasp0de Apr 27 '18 at 16:10
