I have a loop that goes through `range(300)` and creates a thread with the `threading` module, so this creates 300 threads, which is correct.

However, I'm getting a problem where not all the threads start up, and an error is shown in the console, as below:

  File "/usr/local/lib/python2.7/dist-packages/scapy/supersocket.py", line 29, in send
    return self.outs.send(sx)
error: [Errno 105] No buffer space available

Q: Is there a way I can increase the buffer?

Q: Is this a python limit, or a process limit?

Dusty Boshoff

2 Answers


An easy way to limit the number of concurrent threads is to use `concurrent.futures.ThreadPoolExecutor`. Create an instance with the maximum number of worker threads as argument:

from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(100)
for i in range(300):
    executor.submit(do_something, i)
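For completeness, here is a sketch of the same idea with the executor used as a context manager, so the program waits for all tasks to finish before exiting. `do_something` is a stand-in for the real per-item work (e.g. the scapy send):

```python
from concurrent.futures import ThreadPoolExecutor

def do_something(i):
    # stand-in for the real per-item work
    return i * 2

# at most 100 worker threads run at once, regardless of how many
# tasks are submitted; the with-block waits for them all to finish
with ThreadPoolExecutor(max_workers=100) as executor:
    results = list(executor.map(do_something, range(300)))
```

`executor.map` returns the results in submission order, which is often more convenient than collecting futures by hand.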
BlackJack

`concurrent.futures` is working for me. Thanks, @BartoszMarcinkowski.

What I did was as follows.

import concurrent.futures

for i in my_list:
    concurrent.futures.ThreadPoolExecutor(max_workers=1).submit(start_my_function)
Dusty Boshoff
  • If you create a `ThreadPoolExecutor` _per task_, it is just a much more complex way of simply starting a thread per task with `threading.Thread(target=start_my_function).start()`. If your "solution" avoids hitting any limits, it is only because the additional overhead prevents too many threads from being started/running at the same time, purely by accident. A different operating system or faster computer might show the same resource exhaustion as the original code. – BlackJack Aug 06 '15 at 14:15
  • so what do you suggest? – Dusty Boshoff Aug 09 '15 at 09:44
  • I suggest using _one_ `ThreadPoolExecutor` for _all_ tasks, i.e. using that API as it was intended. – BlackJack Aug 10 '15 at 13:24
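The single-executor approach suggested in the comments above can be sketched as follows. `start_my_function` and `my_list` are stand-ins for the real work and input; the point is that one executor is created once and shared by every task:

```python
from concurrent.futures import ThreadPoolExecutor

def start_my_function(item):
    # stand-in for the real work done per item
    return item

my_list = range(300)  # stand-in for the real input list

# ONE executor for ALL tasks; max_workers caps how many run at once
with ThreadPoolExecutor(max_workers=100) as executor:
    futures = [executor.submit(start_my_function, item) for item in my_list]
    results = [f.result() for f in futures]
```

This way the thread count never exceeds `max_workers`, so the per-process resource limits the original 300-thread loop ran into are avoided by design rather than by accident.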