I have a script that ultimately executes two functions. It runs as a daemon, polling for data on a fixed interval (the data is retrieved from a shell command run on the local system). Once it receives this data it will: 1) function 1 - write the data to a log file, and 2) function 2 - inspect the data and send an email IF the data meets certain criteria.

The logging will happen every time, but the alert may not. The issue is that, in cases where an alert needs to be sent, a stalled or slow email connection obviously causes the next poll of the data to stall (for an indeterminate amount of time, depending on the server), and in my case it is very important that the polling interval remains consistent (for analytics purposes).

What is the most efficient way, if any, to keep the email process working independently of the logging process while still operating within the same application and depending on the same data? I was considering creating a separate thread for the mailer, but that kind of seems like overkill in this case.

I'd rather not set a short timeout on the email connection, because I want to give the process some chance to connect to the server, while still allowing the logging to be written consistently on the given interval. Some code:

def send(self, msg_):
    """
    Send the alert message
    :param str msg_: the message to send
    """
    self.msg_ = msg_
    ar = alert.Alert()
    ar.send_message(msg_)

def monitor(self):
    """
    Post to the log file and
    send the alert message when
    applicable
    """
    read = r.SensorReading()
    msg_ = read.get_message()  # the data
    if msg_:  # if there is data at all...
        x = read.get_failed()  # store bad data
        msg_ += self.write_avg(read)
        msg_ += "==============================================="
        self.ctlog.update_templog(msg_)  # write general data to log
        if x:
            self.send(x)  # if bad data, send...
– mmcbride1
  • How persistent should this email be? If the app is restarted or if the machine itself is restarted, should that email still be sent? – tdelaney Dec 11 '16 at 02:07
  • your question would be easier to read if it was broken into three or four paragraphs – Bryan Oakley Dec 11 '16 at 02:13
  • The app is started|stopped|restarted with (app start | stop | restart). If the app is restarted before that alert is sent, then it should be detected upon restart and on the next poll. If that data IS NOT detected on the next poll, then that would indicate that we have returned to a stable state and there is no need for an alert. I will be sure to better format the question next time. – mmcbride1 Dec 11 '16 at 02:15
  • I went ahead and broke the question out. Thanks for the tip. – mmcbride1 Dec 11 '16 at 02:29
  • Watch this - https://www.youtube.com/watch?v=Bv25Dwe84g0 – wwii Dec 11 '16 at 03:54
  • Look at the Python [subprocess](https://docs.python.org/2/library/subprocess.html "subprocess") module. – Charlie Martin Dec 11 '16 at 02:06
  • Thanks for the tip. For purposes of understanding, how would the subprocess work if, for instance, the first execution (process A) met the criteria for sending the alert and was still waiting on a connection, and then the second execution of the poll (process B) was carried out and also met the criteria for sending an alert with new data to send? Process B would wait for process A to complete and send process A's data, and then process B would send its data (or hit the default server timeout), correct? I understand if there is not enough context here to respond. I appreciate the help very much. – mmcbride1 Dec 11 '16 at 02:41

3 Answers

This is exactly the kind of case you want to use threading/subprocesses for. Fork off a thread for the email, which times out after a while, and keep your daemon running normally.
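For illustration, a minimal, untested sketch of that idea built around the send() method from the question (the helper name send_async is just illustrative):

    import threading

    def send_async(self, msg_):
        """Hand the alert off to a background thread so the poller is never blocked."""
        t = threading.Thread(target=self.send, args=(msg_,))
        t.daemon = True  # a stalled SMTP connection won't keep the app alive on shutdown
        t.start()

monitor() would then call self.send_async(x) instead of self.send(x).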

– Samizdis
  • Thanks very much for the input! What would you recommend here: the subprocess or the threading module (which is essentially what my original question boils down to, I guess)? Charlie above recommended the former, but I would like to get some input on both approaches if I am lucky enough. Thanks again. – mmcbride1 Dec 11 '16 at 02:46
  • In this case since you want to keep things simple and the email thread is IO-bound rather than CPU-bound (it's waiting for a response, not calculating anything) I'd pick threading. The links in Ebe's answers are good if you want to read more. – Samizdis Dec 12 '16 at 16:45

Possible approaches that come to mind: the threading and multiprocessing modules.

My personal choice would be multiprocessing as you clearly mentioned independent processes; you wouldn't want a crashing thread to interrupt the other function.

You may also refer to this comparison before making your design choice: Multiprocessing vs Threading Python
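For example, an untested sketch of the multiprocessing variant, reusing the alert.Alert() interface from the question (the module-level helper keeps the target picklable on platforms that spawn rather than fork):

    import multiprocessing
    import alert  # the same alert module used in the question's code

    def _send_alert(msg_):
        # Runs in a child process; a stalled SMTP connection here cannot block the poller.
        ar = alert.Alert()
        ar.send_message(msg_)

    # inside monitor(), instead of self.send(x):
    #     p = multiprocessing.Process(target=_send_alert, args=(x,))
    #     p.daemon = True  # the child is abandoned if the main app exits
    #     p.start()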

– Ébe Isaac

Thanks everyone for the responses; they helped very much. I went with threading, but also updated the code to make sure it handles failing threads. I ran some regressions and found that subsequent polls were no longer being interrupted by stalled connections and the log was being updated on a consistent schedule. Thanks again!!
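For anyone curious, a rough, hypothetical sketch of what that handling might look like, reusing the alert and ctlog objects from the question:

    import threading
    import traceback

    def send(self, msg_):
        """Send the alert; an email failure is logged instead of silently killing the thread."""
        try:
            ar = alert.Alert()
            ar.send_message(msg_)
        except Exception:
            self.ctlog.update_templog("alert failed:\n" + traceback.format_exc())

    # in monitor(), fire and forget:
    #     t = threading.Thread(target=self.send, args=(x,))
    #     t.daemon = True
    #     t.start()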

– mmcbride1