I am implementing a random forest algorithm for a school assignment. To make it faster, I'm using multiprocessing so that each process builds one tree. The trees are limited to a certain maximum depth, say 200 or 500.
The trees are stored in a dictionary defined like this:
import multiprocessing

manager = multiprocessing.Manager()
return_dict = manager.dict()
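For context, a simplified version of my setup looks roughly like this (the Node class is just a stand-in for my real ID3 node, and the worker signature is approximate, not my actual code):

import multiprocessing

class Node:
    # Stand-in for the real decision-tree node: a chain of nested objects
    # whose nesting depth equals the requested tree depth.
    def __init__(self, depth):
        self.depth = depth
        self.child = Node(depth - 1) if depth > 0 else None

def id3_aux(index, depth, return_dict):
    # In my real code this builds an ID3 tree recursively.
    nodo_res = Node(depth)
    # Assigning to the manager dict pickles the whole tree so it can be
    # sent to the manager process; this is the line that appears in the
    # traceback below.
    return_dict[index] = nodo_res

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    return_dict = manager.dict()
    procs = [multiprocessing.Process(target=id3_aux, args=(i, 200, return_dict))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # In my real code this works at depth 200 but fails at 500.
    print(len(return_dict), "trees stored")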
When I run the algorithm with a maximum depth of 200 it works just fine, but at 500 the multiprocessing library raises the following error:
Traceback (most recent call last):
File "C:\Users\agust\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\agust\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\agust\PycharmProjects\tarea2\random_forest.py", line 27, in id3_aux
return_dict[index] = nodo_res
File "<string>", line 2, in __setitem__
File "C:\Users\agust\AppData\Local\Programs\Python\Python38\lib\multiprocessing\managers.py", line 834, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "C:\Users\agust\AppData\Local\Programs\Python\Python38\lib\multiprocessing\connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "C:\Users\agust\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
RecursionError: maximum recursion depth exceeded while pickling an object
If I understand correctly, the tree can't be pickled because it's too deeply nested. Is there any way to increase the recursion limit for pickling, or to store the trees in a different way that avoids this issue?
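For instance, would raising the interpreter's recursion limit inside each worker be a reasonable fix, since (judging by the traceback) the pickling happens in the worker when it sends the tree to the manager? Something like this rough sketch (again using the stand-in Node class, not my real code), though I don't know whether it's safe, or whether the manager process would need the same treatment when I read the trees back:

import sys
import multiprocessing

class Node:
    # Same stand-in node as in the sketch above.
    def __init__(self, depth):
        self.child = Node(depth - 1) if depth > 0 else None

def id3_aux(index, depth, return_dict):
    # Tentative idea: raise the recursion limit in the worker process,
    # because the conn.send() in the traceback pickles the tree here.
    # The default limit is about 1000 and pickling seems to use a few
    # stack frames per nesting level. Setting it very high could crash
    # the interpreter with a real stack overflow, so I'm unsure about this.
    sys.setrecursionlimit(20000)
    return_dict[index] = Node(depth)

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    return_dict = manager.dict()
    p = multiprocessing.Process(target=id3_aux, args=(0, 500, return_dict))
    p.start()
    p.join()
    print(0 in return_dict)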