
First things first, I'm very new to bash scripting.

Well, I have a bash script that starts the creation of a loopback server. It runs some bash commands, then runs 'expect', which starts a program called wadm (expect handles the password wadm prompts for).

Here's a quick overview:

  • do some bash cmds (prompt for username/pass)
  • compute some stuff
  • start expect shell within bash
    • expect starts the wadm with specific username
      • wadm prompts for password
      • expect enters the password
      • expect runs the wadm-specific cmds
      • quit wadm (with expect sending 'quit' to wadm)
    • quit expect (expect ends within bash script)
  • edit some files that the above wadm cmds created
  • start expect shell within bash
    • expect starts the wadm with specific username
      • wadm prompts for password
      • expect enters the password
      • expect runs the wadm-specific cmds (different cmds that rely on previous wadm cmds)
      • quit wadm (with expect sending 'quit' to wadm)
    • quit expect (expect ends within bash script)

What I want to do is keep expect and wadm running in the background (so as not to start/quit wadm every time I need to do something in it) while I do some other stuff in bash.

Being new to bash scripting (and not that advanced in Linux/Unix in general), I thought of using job control for this, but according to this post (http://stackoverflow.com/questions/690266/why-cant-i-use-job-control-in-a-bash-script) job control is probably not the way to go. What other options are there for this kind of process?

glenn jackman
spud

2 Answers


Create two named pipes:

mkfifo wadm_stdin
mkfifo wadm_stdout

Start wadm in the background:

wadm <wadm_stdin >wadm_stdout &
wadm_id=$!

Script it with expect as many times as desired (don't forget to log in the first time, and to remove the quit from the end):

expect ... <wadm_stdout >wadm_stdin

When finished with wadm, wait for it to exit:

cat wadm_stdout >/dev/null &  # Read any output to prevent blocking on a full pipe
echo quit >wadm_stdin
wait $wadm_id
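
The same plumbing can be exercised end-to-end with an ordinary line-oriented program standing in for wadm. This is only a sketch under the assumption that wadm reads commands and writes replies line by line; here a plain bash process plays its part, and demo_stdin/demo_stdout are stand-in fifo names:

```shell
# End-to-end sketch of the fifo pattern, with a plain bash process
# standing in for wadm.
mkfifo demo_stdin demo_stdout

bash <demo_stdin >demo_stdout &   # the "wadm" running in the background
demo_id=$!

exec 3>demo_stdin                 # keep the write end open between sessions

echo 'echo $((2+2))' >&3          # one "session": send a command...
read answer <demo_stdout          # ...and collect its output
echo "$answer"                    # prints 4

echo 'exit' >&3                   # tell the background process to quit
exec 3>&-                         # close our write end
wait "$demo_id"
rm demo_stdin demo_stdout
```

Holding fd 3 open on the fifo is what lets you run several independent "sessions" without the background process seeing end-of-file in between.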
Dark Falcon
  • I'm not quite sure I follow this line: `wadm <wadm_stdin >wadm_stdout &`. My snippet where I run expect/wadm looks like this: `expect -c " stty -echo set password $PASSW spawn $WORKDIR/bin/wadm --user=$yourUID expect \"Please enter admin-user-password>\" send \"$password\r\" expect { -re \"Invalid user or password\" { exit 3 } \"wadm>\" { send \"copy-config --config=$oldUID-$MAXPORT $1-$NP\r\" } } expect \"wadm>\" send \"quit\r\" "` – spud Aug 26 '11 at 14:47
  • Is `wadm <wadm_stdin >wadm_stdout &` saying that all input to `wadm_stdin` gets piped to `wadm` and all output from `wadm` gets piped to `wadm_stdout`? – spud Aug 26 '11 at 15:05
  • Yes, that is exactly what that means. – Dark Falcon Aug 26 '11 at 15:49

Usually you don't interact with a script and then send it to the background - you choose one or the other. But of course, on Linux everything is possible.

The main problem here is the pipes (stdout/stderr): you can't just close them, or the script will most likely get an error on write to the pipe and exit. So you need to redirect the existing pipes. I don't know any standard tool that can do that, but it can be done with gdb.

Job control is a controversial topic too, IMHO. If you are not writing a commercial script, it may be easier to require job control on every server than to build workarounds. But in that case, the one thing you must know about it is that the parent shell sends SIGHUP to all child processes on exit, so it may be better to just turn that off here, or to use traps to ignore it (like I do).
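
The trap trick mentioned here can be seen in isolation - a minimal sketch, not tied to wadm:

```shell
# Minimal demonstration of ignoring SIGHUP via a trap: after `trap '' HUP`
# the shell survives a signal that would otherwise terminate it.
trap '' HUP
kill -HUP $$
echo "survived SIGHUP"
```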

Now, I will describe one way to do what you want.

First, make sure that gdb can attach to a running process owned by the same user. This is enabled by default on many systems, but some allow only root to do it. Usually you can change that by setting /proc/sys/kernel/yama/ptrace_scope to 0 (it may be unsafe to do so). Run "gdb -p some_pid" to check whether it works, where "some_pid" is the PID of any process run by your user.
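
Whether your kernel restricts same-user attaching can be checked without root - a sketch, assuming a Linux kernel (the Yama path only exists when that security module is built in):

```shell
# Report the current ptrace policy; 0 means same-user attach is allowed.
scope_file=/proc/sys/kernel/yama/ptrace_scope
if [ -r "$scope_file" ]; then
    echo "ptrace_scope=$(cat "$scope_file")"
else
    echo "Yama not present - it is not the thing restricting attach"
fi
```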

As gdb seems to have some problems reading from stdin, set up a file somewhere on your system (/usr/share/gdb_null_descr in my example) with the following contents:

# open flag 1 = O_WRONLY: the descriptors being replaced are written to
p dup2(open("/dev/null",1),1)
p dup2(open("/dev/null",1),2)

This tells gdb to make the attached process redirect its stdout and stderr to /dev/null (you can change /dev/null to any other file if you want to save the output, but be careful with permissions).

Now everything is simple. For testing, create a simple daemon like the daemon.sh in this example:

#!/bin/bash
success=0
while [ "$success" -lt 1 ]
do
    echo "Give me username!"
    read username
    echo "Give me password!"
    read password
    if [[ "$username" = "root" && "$password" = "rootpass" ]]
    then
        success=1
    else
        echo "Invalid username/password!"
    fi
done
echo "Logged in successfully!"
echo -n > test_file
loop_count=0
while true
do
    echo "Still working"
    if [ "$loop_count" -eq 100 ]
    then
        loop_count=0
        echo "Please, kill me, I am tired!"
    fi
    let loop_count++
    echo "$loop_count" >> test_file
    sleep 1
done

Now, our expect script:

#!/usr/bin/expect
# Never time out
set timeout -1
# Start a bash process - we need it only to set the trap
spawn bash
# Trap SIGHUP to nothing - bash will just ignore this signal.
send "trap '' HUP;\n"
# Start our half-daemon by replacing this bash process (using exec).
send "exec ./daemon.sh;\n"
# Interact with the half-daemon
expect *username!
send "root\n"
expect *password!
send "rootpass\n"
# OK, the interaction ends; redirect the pipes
system gdb -p [exp_pid] --batch -x /usr/share/gdb_null_descr

Now run the expect script and look at test_file - if new entries appear each second, the daemonization succeeded!
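
The "does it keep running after its parent is gone" check can also be sketched self-contained, with a trivial stand-in loop instead of daemon.sh (demo_file is a hypothetical name, not from the scripts above):

```shell
# A throwaway background loop that ignores SIGHUP, standing in for daemon.sh.
( trap '' HUP; for i in 1 2 3 4 5; do echo "$i" >> demo_file; sleep 1; done ) &
loop_id=$!

sleep 1                        # let it write at least one line
before=$(wc -l < demo_file)
sleep 2                        # if it is still alive, more lines appear
after=$(wc -l < demo_file)

[ "$after" -gt "$before" ] && echo "still writing"

wait "$loop_id"
rm demo_file
```

The same before/after line count against test_file tells you whether the real half-daemon survived the expect script's exit.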

XzKto