Specification of the problem:
I am running some complex tasks in Python, and to speed them up I decided to use Python's multiprocessing library. It worked pretty well, but after some time I started to wonder how much time the locks I use are consuming, and how much the processes are blocking each other.
The structure of the processes is as follows:
One process updates a list shared between the processes. The update code looks something like this:
lock.acquire()
list_rex[0] = pat.some_list
list_rex[1] = pat.some_dictionary
lock.release()
where list_rex and lock are defined (with import multiprocessing as multi and manager = multi.Manager()) by
list_rex = manager.list([[some_elements], {some_key: some_value}])
lock = multi.Lock()
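For context, here is a minimal, self-contained sketch of the updating process (pat and the actual data are simplified to placeholders here):

import multiprocessing as multi
import time

def compute_update(step):
    # placeholder for my real work; the list and the dict belong together
    return [step, step + 1], {"step": step}

def updater(list_rex, lock):
    for step in range(5):
        new_list, new_dict = compute_update(step)
        lock.acquire()
        list_rex[0] = new_list   # both slots are replaced under one lock,
        list_rex[1] = new_dict   # so readers never see a mixed state
        lock.release()
        time.sleep(0.1)

if __name__ == "__main__":
    manager = multi.Manager()
    list_rex = manager.list([[], {}])
    lock = multi.Lock()
    p = multi.Process(target=updater, args=(list_rex, lock))
    p.start()
    p.join()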
Then there are several processes that once in a while update their own memory space from this list. The code is as follows:
lock.acquire()
some_list = list_rex[0]
some_dict = list_rex[1]
lock.release()
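For completeness, a simplified sketch of such a worker process (the print stands in for my actual computations):

import multiprocessing as multi
import time

def worker(list_rex, lock):
    for _ in range(5):
        lock.acquire()
        some_list = list_rex[0]   # each proxy read returns a copy of the element
        some_dict = list_rex[1]   # taken inside the lock so the pair matches
        lock.release()
        # placeholder for the heavy read-only work on the local copies
        print(len(some_list), len(some_dict))
        time.sleep(0.1)

if __name__ == "__main__":
    manager = multi.Manager()
    list_rex = manager.list([[1, 2], {"a": 1}])
    lock = multi.Lock()
    workers = [multi.Process(target=worker, args=(list_rex, lock)) for _ in range(3)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()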
some_list and some_dict are related to each other, so I cannot allow a process to end up with some_list from one update and some_dict from another.
My question is: how fast are the acquire() and release() methods? In my case they can be called every few seconds, and sometimes every few milliseconds. And is there some way to avoid using locks in my case?
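One idea I have considered for dropping the lock (I am not certain it is safe, so treat this as an assumption): keep the list and the dict together in a single slot, so a reader fetches both with one proxy call. As far as I understand, a single __getitem__ or __setitem__ on a manager proxy is one round trip to the manager process, so the pair would always come from the same update:

import multiprocessing as multi

def reader(shared):
    some_list, some_dict = shared[0]  # one proxy call returns a matching pair
    print(some_list, some_dict)

if __name__ == "__main__":
    manager = multi.Manager()
    # both related objects live in one slot as a tuple
    shared = manager.list([([1, 2], {"step": 0})])
    # updater side: a single assignment swaps the whole pair at once
    shared[0] = ([3, 4], {"step": 1})
    p = multi.Process(target=reader, args=(shared,))
    p.start()
    p.join()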
Thank you for your time.
EDIT: After considering your comment, my question probably should be: how are proxy lists affecting my calculations? I read from some_list and some_dict really a lot after each update.
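To make the EDIT concrete, this is the kind of timing sketch I would use to see the proxy overhead (exact numbers will of course vary by machine; the point is that every proxy read is an IPC round trip, while a local copy is read in-process):

import multiprocessing as multi
import time

if __name__ == "__main__":
    manager = multi.Manager()
    list_rex = manager.list([list(range(1000)), {"a": 1}])

    t0 = time.perf_counter()
    for _ in range(1000):
        _ = list_rex[0]           # each read goes through the manager process
    proxy_time = time.perf_counter() - t0

    local_list = list_rex[0]      # one-time copy into this process
    t0 = time.perf_counter()
    for _ in range(1000):
        _ = local_list[0]         # plain local access, no IPC
    local_time = time.perf_counter() - t0

    print(f"proxy: {proxy_time:.4f}s  local: {local_time:.6f}s")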