
I have a program which dumps information into a named pipe like this:

cmd=open(destination,'w')
cmd.write(data)
cmd.close()

This works pretty well until the pipe (destination) disappears while my program is writing to it. The problem is that the program just hangs on the write call; I was expecting some exception to be raised, but that's not the case. How can I avoid this situation?

Thanks,

Jay


2 Answers

If the process reading from the pipe is not reading as fast as you're writing, your script will block when it tries to write to the pipe. From the Wikipedia article:

"If the queue buffer fills up, the sending program is suspended (blocked) until the receiving program has had a chance to read some data and make room in the buffer. In Linux, the size of the buffer is 65536 bytes."

Luckily, you have a few options:

  • The signal module will allow you to set an alarm to break out of the write call. After the prescribed amount of time, a SIGALRM signal will be sent to your process; if your handler for that signal raises an exception, it will break you out of the write.

  • With threading, you can spawn a new thread to handle the writing, killing it if it blocks for too long.

  • You can also use the fcntl module to make the pipe non-blocking (meaning the call will not wait; it will fail immediately if the pipe is full): Non-blocking read on a subprocess.PIPE in python.

  • Finally, you can use the select module to check if the pipe is ready for writing before attempting your write. Just be careful: the check-then-write sequence is not atomic (e.g. the pipe could fill up between the check and the write).
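A minimal sketch of the third option, using fcntl to put the write end of a FIFO into non-blocking mode. The FIFO created here is a throwaway one just for the demonstration; in real code, your destination pipe and its existing reader take its place:

```python
import errno
import fcntl
import os
import tempfile

# A throwaway FIFO just for the demonstration.
path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)

# Open the read end first: a FIFO cannot be opened for
# writing until a reader exists.
rfd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
wfd = os.open(path, os.O_WRONLY)

# Switch the write end to non-blocking mode via fcntl.
flags = fcntl.fcntl(wfd, fcntl.F_GETFL)
fcntl.fcntl(wfd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    os.write(wfd, b"data")      # returns immediately instead of hanging
    echoed = os.read(rfd, 16)   # the reader drains what was written
except OSError as e:
    if e.errno == errno.EAGAIN:
        pass                    # pipe buffer full: back off and retry
    else:
        raise
finally:
    os.close(wfd)
    os.close(rfd)
```

With the write end non-blocking, a full buffer produces OSError with EAGAIN, and a reader that has gone away entirely produces EPIPE (BrokenPipeError), instead of a silent hang either way.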

Zack Bloom
  • I will have a look into the third option, and if that doesn't work out, I think the second option would fit my purpose. Signal is not an option as this writing happens in a thread already. The fourth option sounds tricky as I'm possibly not the only process writing to that pipe. – jay_t Feb 04 '11 at 08:38
  • `select` is easy to use, but I believe that, in this case, it may not be perfectly adequate as the pipe may disappear between the `select()` call and the `write()` call. – Pedro Matiello Feb 04 '11 at 12:22
  • Yea Pedro, that's the problem. The OS could do a context switch right after the check, fill up the pipe, and switch back to your write (which will now fail). – Zack Bloom Feb 05 '11 at 04:42
  • Oops. It seems you have already mentioned that in your response. I guess I should not post just before going to bed. Sorry. :) – Pedro Matiello Feb 05 '11 at 13:16
  • I chose to do the writing in a dedicated thread and use a join with timeout on the thread object, which looks like it's doing the job. But now the problem is: how do I kill the thread? Killing threads appears to be bad practice, ... but in my case it's pretty clear. I do "del object" to kill the thread, ... I hope this is sufficient. – jay_t Feb 05 '11 at 16:51
  • Have you tried the `exit()` method? http://docs.python.org/library/thread.html#thread.exit – Pedro Matiello Feb 07 '11 at 13:12

I think that the signal module can help you. Check this example:

http://docs.python.org/library/signal.html#example

(The example deals with a possibly never-returning open() call, but it can be trivially modified to do the same thing to your cmd.write() call.)

  • It indeed crossed my mind and it would be a way out, but it's currently not an option, because the writing happens in a blocking thread and apparently it's not possible to use signals in threads, only in the main thread ... It would require quite some redesign for me to implement this. – jay_t Feb 04 '11 at 08:34
  • What about receiving the signal in the main thread and then calling `thread.exit()`? – Pedro Matiello Feb 04 '11 at 12:05
  • From what I have read, signalling and threading should be avoided together. Anyhow, I have decided to make another thread which handles the actual writing within the thread which makes the decision to write. I'm sure there are other ways too, but it works. – jay_t Feb 05 '11 at 16:57