I've been trying to get my head around multiprocessing. The problem is that none of the examples I've come across seem to fit my scenario. I'd like to multiprocess or thread work over a list passed in as an argument. Of course, I don't want an item from that list being worked on twice, so the work needs to be divided out across the new threads/processes.

Any advice on the approach I should be looking at would be appreciated.

I am aware the code below is not correct by any means; it is only there to help visualise what I am trying to explain.

Pseudocode:

import multiprocessing
import subprocess

def work_do(ip):
    # ping a single address handed out by the pool
    subprocess.check_output(["ping", "-c", "4", ip])

def mp_handler(ip_list):
    p = multiprocessing.Pool(4)
    p.map(work_do, ip_list)  # the pool divides the list among the workers

ip_list = ["192.168.1.%d" % i for i in range(1, 255)]  # 192.168.1.1-192.168.1.254
mp_handler(ip_list)

EDITED:

Some Working Code

import multiprocessing
import subprocess

def job(ip):
    p = subprocess.check_output(["ping", "-c", "4", ip])
    print(p)

def mp_handler(ip_range):
    p = multiprocessing.Pool(2)
    p.map(job, ip_list)  # note: this maps over the global ip_list, not the argument

ip_list = ("192.168.1.74", "192.168.1.254")

for ip in ip_list:
    mp_handler(ip)

If you run the above code, you'll notice both IPs are pinged twice: `mp_handler` is called once per address, yet each call maps `job` over the entire list. How do I manage the processes so that they only work on unique data from the list?
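
A minimal sketch of the difference, assuming the goal is simply one ping per address: call the handler once with the whole list, and `Pool.map` hands each item to exactly one worker.

import multiprocessing
import subprocess

def job(ip):
    print(subprocess.check_output(["ping", "-c", "4", ip]))

def mp_handler(ip_list):
    p = multiprocessing.Pool(2)
    p.map(job, ip_list)  # each address goes to exactly one worker

if __name__ == "__main__":
    mp_handler(["192.168.1.74", "192.168.1.254"])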

iNoob

2 Answers

What you are currently doing should pose no problem, but if you want to manually create the processes and then join them later on:

import subprocess
import multiprocessing as mp


# Creating our target function here
def do_work(ip):
    # dummy function: ping the given address and print the raw output
    p = subprocess.check_output(["ping", "-c", "4", ip])
    print(p)

# Your ip list
ip_list = ['8.8.8.8', '8.8.4.4']

procs = []  # Will contain references to our processes
for ip in ip_list:
    # Creating a new process
    p = mp.Process(target=do_work, args=(ip,))

    # Appending to procs
    procs.append(p)

    # starting process
    p.start()

# Waiting for all processes to finish
for p in procs:
    p.join()
Games Brainiac
  • Thanks for the response @Games Brainiac, I tested the above code and it seemed to work perfectly. However, implementing it into my own code seemed to work until closer inspection showed that the response from the query was coming back with a different ip, i.e. the data seemed to get mixed up; request and response weren't tied together correctly. – iNoob Oct 12 '14 at 01:11
  • @iNoob You need to modify the `do_work` function to print tuples of the result instead of just printing to the console. What I've shown you is a trivial example that you can build upon. – Games Brainiac Oct 12 '14 at 03:51
  • @iNoob: you shouldn't print from multiple processes without a lock. To avoid the mixed output, you could move the reporting of the results to the main process, as in my answer. And if all you want is a status, then there is no point in using `mp.Process()` here: `ping` is already a separate process, see [the link I've provided in my answer](http://stackoverflow.com/a/12102040/4279). – jfs Oct 12 '14 at 04:08
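
A minimal sketch of what these comments suggest (an assumption built on the answer above, not code from either user): each worker sends an `(ip, output)` tuple back over a `multiprocessing.Queue`, and only the main process prints, so request and response stay tied together and the output cannot interleave.

import subprocess
import multiprocessing as mp

def do_work(ip, results):
    # ping, then report (ip, output) instead of printing from the worker
    try:
        output = subprocess.check_output(["ping", "-c", "4", ip])
    except subprocess.CalledProcessError as e:
        output = e.output
    results.put((ip, output))

if __name__ == "__main__":
    ip_list = ['8.8.8.8', '8.8.4.4']
    results = mp.Queue()
    procs = [mp.Process(target=do_work, args=(ip, results)) for ip in ip_list]
    for p in procs:
        p.start()
    for _ in procs:
        ip, output = results.get()  # one result per worker; printed only here
        print(ip, output)
    for p in procs:
        p.join()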

Pinging multiple IP addresses concurrently is easy using multiprocessing:

#!/usr/bin/env python
from multiprocessing.pool import ThreadPool # use threads
from subprocess import check_output

def ping(ip, timeout=10):
    cmd = "ping -c4 -n -w {timeout} {ip}".format(**vars())
    try:
        result = check_output(cmd.split())
    except Exception as e:
        return ip, None, str(e)
    else:
        return ip, result, None

ip_list = ["192.168.1.74", "192.168.1.254"]  # the addresses from the question

pool = ThreadPool(100) # no more than 100 pings at any single time
for ip, result, error in pool.imap_unordered(ping, ip_list):
    if error is None: # no error
        print(ip) # print ips that have returned 4 packets in timeout seconds

Note: I've used ThreadPool here as a convenient way to limit the number of concurrent pings. If you want to do all pings at once then you need neither the threading nor the multiprocessing modules, because each ping already runs in its own process. See Multiple ping script in Python.
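
For illustration, a minimal sketch of that no-pool variant (assuming the same Linux ping flags as in the answer above): every ping is launched immediately as its own OS process, and the results are collected afterwards.

#!/usr/bin/env python
from subprocess import Popen, PIPE

ip_list = ["192.168.1.74", "192.168.1.254"]  # the addresses from the question

# start all pings at once; each ping is already a separate process
procs = [(ip, Popen(["ping", "-c", "4", "-n", "-w", "10", ip],
                    stdout=PIPE, stderr=PIPE))
         for ip in ip_list]

for ip, proc in procs:
    proc.communicate()        # wait for this ping to finish
    if proc.returncode == 0:  # 0 means all packets came back in time
        print(ip)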

jfs