
I am writing a Python program that pings devices and reports online/offline status and latency. Right now it is working fine, but whenever a device is offline or not responding, the output hangs for about 5 seconds.

My question is: can I ping everything independently rather than sequentially, and/or can I put some sort of time limit on the subprocess so that, if nothing has come back after ~100-200 ms, it moves on to the next host?

Here is the relevant part of the code I am currently working on:

import re
import subprocess

for item in lines:
    # Strip the trailing newline/whitespace from the hostname.
    hostname = item.rstrip()

    # Run ping and capture its output: subprocess.Popen runs the
    # command-line ping with stdout piped, and .stdout.read() reads
    # that stream into ping_response.
    ping_response = subprocess.Popen(["ping", hostname, "-n", "1"],
                                     stdout=subprocess.PIPE).stdout.read()
    word = "Received = 1"
    word2 = "Destination host unreachable."

    # Regex for finding the time values and putting them into a list.
    p = re.compile(ur'(?<=time[<=])\S+')
    x = re.findall(p, ping_response)

    if word2 in ping_response:
        print "Destination Unreachable"
    elif word in ping_response:
        print "%s is online with latency of %s" % (hostname, x[0])
    else:
        print "%s is offline" % hostname
Abraxas
    This would be a very good application for multithreading, because your program isn't CPU or I/O bound, but instead is waiting most of the time (on the network, or on a timeout). [Doug Hellmann's article on the `threading` module](http://pymotw.com/2/threading/) would be a good place to start. – Lukas Graf May 07 '15 at 19:40
  • @LukasGraf: [You don't need threads](http://stackoverflow.com/a/12102040/4279) here (though [they can be used](http://stackoverflow.com/a/26321632/4279)); `Popen()` creates a separate process and returns immediately without waiting for the child process to exit. You don't want 1000 threads/processes to ping 1000 hosts in parallel. Async. I/O could be used here (a `select`-like loop such as used in `twisted`, `asyncio`, `gevent` libraries). – jfs May 09 '15 at 12:43
  • related: [Multiple ping script in Python](http://stackoverflow.com/q/12101239/4279) – jfs May 09 '15 at 12:46
  • @J.F.Sebastian *"You don't want 1000 threads/processes to ping 1000 hosts in parallel"* - that's exactly what you're doing in your `Popen()` multiple ping script though, no? I don't see any sort of pooling there. With threads you'd obviously use thread pooling like you did in your other example. – Lukas Graf May 09 '15 at 13:09
  • @LukasGraf: [the script](http://stackoverflow.com/a/12102040/4279) creates 100 processes, not 1000. If OP needs to ping thousands of hosts concurrently then [async. io](http://stackoverflow.com/a/4868866/4279) should be considered (as well as [thread pooling](http://stackoverflow.com/a/26321632/4279)). – jfs May 09 '15 at 13:15

2 Answers


My question is can I either ping everything independently and not sequentially

Sure. There are a variety of solutions to that problem, including both the threading and multiprocessing modules.
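For example, a small thread pool (here via `multiprocessing.dummy`, whose `Pool` has the `multiprocessing` API but uses threads) lets offline hosts time out in parallel rather than one after another. A sketch, assuming the Windows-style ping flags from the question (`-n` count, `-w` timeout in ms; Linux would use `-c` and `-W`); the host list and pool size are illustrative:

```python
# Sketch: ping several hosts at once with a thread pool, so the waits
# for unresponsive hosts overlap instead of adding up.
import subprocess
from multiprocessing.dummy import Pool  # thread-based Pool, same API as multiprocessing

def ping(hostname):
    # One echo request, give up after ~200 ms (Windows flags).
    proc = subprocess.Popen(["ping", hostname, "-n", "1", "-w", "200"],
                            stdout=subprocess.PIPE)
    output = proc.communicate()[0]
    return hostname, proc.returncode == 0, output

if __name__ == "__main__":
    hosts = ["192.168.1.%d" % i for i in range(1, 11)]  # example address range
    pool = Pool(10)  # at most 10 pings in flight at a time
    for hostname, alive, _ in pool.map(ping, hosts):
        print("%s is %s" % (hostname, "online" if alive else "offline"))
    pool.close()
    pool.join()
```

The pool size caps how many child processes exist at once, which matters if the host list is large (see the comments above about not launching 1000 pings in parallel).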

and/or can I set a time filter of some sort on the subprocess so that, if things aren't updated after ~100-200ms it moves on to the next?

You can actually set a timeout on ping itself, at least on the Linux version, using the -W option:

   -W timeout
          Time to wait for a response, in seconds. The option affects only
          timeout in absence of any responses, otherwise  ping  waits  for
          two RTTs.
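In your script that just means adding the flag to the `Popen` argument list. A sketch of the Linux (iputils) form; Windows ping instead spells it `-w` and takes milliseconds, and the `ping_args`/`ping_once` helper names here are illustrative:

```python
# Sketch: build the ping command with its own timeout flag, so an
# unreachable host gives up after ~200 ms instead of the default wait.
import subprocess

def ping_args(hostname, timeout_ms=200):
    # -c 1: one echo request; -W: seconds to wait for a reply.
    # Note: some iputils versions only accept whole seconds here,
    # in which case -W 1 is the practical floor.
    return ["ping", hostname, "-c", "1", "-W", str(timeout_ms / 1000.0)]

def ping_once(hostname, timeout_ms=200):
    proc = subprocess.Popen(ping_args(hostname, timeout_ms),
                            stdout=subprocess.PIPE)
    output = proc.communicate()[0]
    return proc.returncode == 0, output
```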
larsks

Ping has a timeout feature which will help your script's efficiency.

-W waittime
             Time in milliseconds to wait for a reply for each packet sent.  If a reply arrives later, the packet
             is not printed as replied, but considered as replied when calculating statistics.

Also, here are some other utilities to ping efficiently.

fixxxer