
A: Why does it block?

B: How may I massage this slightly so that it will run without blocking?

#!/usr/bin/env python
import subprocess as sp
import os

kwds = dict(
    stdin=sp.PIPE,
    stdout=sp.PIPE,
    stderr=sp.PIPE,
    cwd=os.path.abspath(os.getcwd()),
    shell=True,
    executable='/bin/bash',
    bufsize=1,
    universal_newlines=True,
)
cmd = '/bin/bash'
proc = sp.Popen(cmd, **kwds)
proc.stdin.write('ls -lashtr\n')
proc.stdin.flush()

# This blocks and never returns
proc.stdout.read()

I need this to run interactively.

This is a simplified example, but in reality I have a long-running process, and I'd like to start up a shell script that can run more or less arbitrary code (because it's an installation script).

EDIT: Effectively, I would like to take a .bash_history accumulated over several different logins, clean it up into a single script, and then execute that newly crafted shell script line-by-line within a shell driven by a Python script.

For example:

> ... ssh to remote aws system ...
> sudo su -
> apt-get install stuff
> su - $USERNAME
> ... create and enter a docker snapshot ...
> ... install packages, update configurations
> ... install new services, update service configurations ...
> ... drop out of snapshot ...
> ... commit the snapshot ...
> ... remove the snapshot ...
> ... update services ...
> ... restart services ...
> ... drop into a tmux within the new docker ...

This takes hours manually; it should be automated.

Brian Bruggeman
  • your example code that could be replaced with `check_output(['ls', '-lashtr'])` does not correspond to the question (it is too simplistic to be meaningful for the question described in the text). It is not easy to use `subprocess` for a dialog-based interaction with a child process. Forget about *"slightly"* in the general case. The text of the question (not the code) is too broad: what does *"splits out the errors"* mean ([do you want to capture stdout/stderr separately?](http://stackoverflow.com/questions/31926470))? How do you find the boundaries between the output of several commands? What does "re-run" mean? – jfs Aug 19 '15 at 04:36
  • I want to open a process and leave it open. `communicate` cannot help me with that, as it closes the process. If you need more information, consider something like: docker run -i -t ubuntu /bin/bash; su - ; run script; check output; <-- store that whole thing to a log on a local machine. – Brian Bruggeman Aug 19 '15 at 13:55
  • Don't tell us what you think won't work: (1) you can be wrong, e.g., you can run multiple commands using `.communicate()`; (2) it doesn't tell us what your actual problem is. Answer the questions from my previous comment. – jfs Aug 19 '15 at 14:24
  • I'm not sure I can be more clear: I want a persistent shell where I can run arbitrary shell commands across a network specifically because I'm dealing with docker containers. I wanted a simple example to explain the issue. If you have a better way of explaining it, I'd be happy to have an edit. Communicate closes the file descriptors. – Brian Bruggeman Aug 19 '15 at 16:28
  • I don't see answers to the questions I've asked. It is up to you to help others to help you. – jfs Aug 19 '15 at 17:06
  • I want to segregate std{out,err} and then combine later. I want to run a bash script line-by-line within the python interpreter and then take action based on the error code or stderr or stdout. Communicate closes the file descriptors, and that doesn't help me run line by line. I've updated the initial question. – Brian Bruggeman Aug 19 '15 at 17:41
  • (1) It is not possible to separate std{out,err} if you want to [combine them later *while preserving order* in the general case](http://stackoverflow.com/q/31833897/4279). (2) follow the link in my first comment that shows how to use `pty` to read std{out,err} separately in "real-time". To provide input, pass the corresponding fd to `select` and use `os.write()`. (3) I don't see how it is possible to separate output from several commands if you don't know what the commands are and if they may change the shell prompt. You have to treat *each command* that might change the prompt specially. – jfs Aug 20 '15 at 03:55

2 Answers


A: Why does it block?

It blocks because that's what .read() does: it reads all of the bytes until an end-of-file indication. Since the process never indicates end of file, the .read() never returns.
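
As an illustration (this variation is mine, not part of the original question): `readline()` returns as soon as one full line is available, whereas `read()` waits for end of file, so reading a known number of lines does not hang:

proc.stdin.write('ls -lashtr\n')
proc.stdin.flush()
# readline() returns as soon as the first newline arrives...
first_line = proc.stdout.readline()
# ...but a readline() issued after the output is exhausted would block again,
# because the shell is still alive and may yet produce more output.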

B: How may I massage this slightly (emphasis on slightly) so that it will run without blocking?

One way is to cause the process to indicate end of file. The smallest change is to have the subprocess exit:

proc.stdin.write('ls -lashtr; exit\n')
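
For a one-shot run, a minimal sketch combining this with the question's setup (the final `print` is mine): once the shell exits, its stdout reaches end of file, and `communicate()` can collect everything it wrote:

#!/usr/bin/env python
import subprocess as sp

proc = sp.Popen('/bin/bash', stdin=sp.PIPE, stdout=sp.PIPE,
                stderr=sp.PIPE, universal_newlines=True)

# communicate() writes the input, closes stdin, and reads both pipes
# until EOF; the trailing `exit` makes the shell terminate.
out, err = proc.communicate('ls -lashtr; exit\n')
print(out)
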
Robᵩ

This is an example from another answer of mine: https://stackoverflow.com/a/43012138/3555925, which does not use pexpect. You can find more detail in that answer.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import sys
import select
import termios
import tty
import pty
from subprocess import Popen

command = 'bash'
# command = 'docker run -it --rm centos /bin/bash'.split()

# save original tty setting then set it to raw mode
old_tty = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin.fileno())

# open pseudo-terminal to interact with subprocess
master_fd, slave_fd = pty.openpty()

# use os.setsid() to make it run in a new session; without it, bash job control will not be enabled
p = Popen(command,
          preexec_fn=os.setsid,
          stdin=slave_fd,
          stdout=slave_fd,
          stderr=slave_fd,
          universal_newlines=True)

while p.poll() is None:
    r, w, e = select.select([sys.stdin, master_fd], [], [])
    if sys.stdin in r:
        d = os.read(sys.stdin.fileno(), 10240)
        os.write(master_fd, d)
    elif master_fd in r:
        o = os.read(master_fd, 10240)
        if o:
            os.write(sys.stdout.fileno(), o)

# restore the original tty settings
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
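
If what you want is the scripted, line-by-line run described in the question rather than a live terminal, the same pty approach can be fed from a list of commands instead of sys.stdin. A rough sketch under that assumption (the command list is hypothetical, and note that the pty echoes the input back interleaved with the output):

#!/usr/bin/env python
import os
import sys
import select
import pty
from subprocess import Popen

commands = ['ls -lashtr', 'echo step-two']  # hypothetical script lines

master_fd, slave_fd = pty.openpty()
p = Popen('bash',
          preexec_fn=os.setsid,
          stdin=slave_fd,
          stdout=slave_fd,
          stderr=slave_fd)

# feed the whole script, then tell the shell to exit so the loop can finish
for line in commands:
    os.write(master_fd, (line + '\n').encode())
os.write(master_fd, b'exit\n')

# drain output until the shell terminates
while p.poll() is None:
    r, _, _ = select.select([master_fd], [], [], 0.1)
    if master_fd in r:
        try:
            data = os.read(master_fd, 10240)
        except OSError:  # Linux raises EIO once the slave side is closed
            break
        if data:
            os.write(sys.stdout.fileno(), data)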
Paco