I have a simple program that resizes a batch of images in Python:
def resize(image_path_list):
# open image
# resize image
# rename image
# save new image
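(For context, a minimal sketch of what those four steps might look like, assuming Pillow is the imaging library and that "rename" means appending a suffix before saving; the actual details aren't shown above, so the size argument and the "_resized" suffix are illustrative assumptions.)

```python
import os
from PIL import Image  # assumes Pillow is the library in use

def resize(image_path_list, size=(800, 600)):
    for path in image_path_list:
        # open image
        with Image.open(path) as img:
            # resize image
            resized = img.resize(size)
        # rename image (hypothetical suffix convention)
        root, ext = os.path.splitext(path)
        new_path = root + "_resized" + ext
        # save new image
        resized.save(new_path)
```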
I'm attempting to increase the processing speed via multithreading, which I chose over multiprocessing because A) this function is imported and called from a GUI, so the if __name__ == "__main__" guard that multiprocessing requires wasn't an option, and B) I figured opening and saving the files on disk was my optimization opportunity.
However, with the following thread instantiation I gained no speed increase (64.1 s vs. 64.05 s), when I was expecting it to roughly halve the time:
t1 = threading.Thread(target=resize(first_half_of_list))
t2 = threading.Thread(target=resize(second_half_of_list))
t1.start()
t2.start()
t1.join()
t2.join()
I'm testing on a batch of 150+ images, each 1 MB or larger. Any thoughts?