
I'm very new to Linux (coming from Windows) and trying to write a script that I can hopefully execute over multiple systems. I tried to use Python for this but found it hard too. Here is what I have so far:

cd /bin
bash
source compilervars.sh intel64
cd ~
exit #exit bash
file= "~/a.out"
if[! -f "$file"]
then
icc code.c
fi

#run some commands here...

The script hangs on the second line (bash). I'm not sure how to fix that or if I'm doing it wrong. Please advise.

Also, any tips on how to run this script over multiple systems on the same network?

Thanks a lot.

Samy
  • How are you running this script? What is it intended to do? Why are you trying to invoke bash in the middle, rather than running as bash in the first place? – Two-Bit Alchemist Jul 25 '16 at 16:15
  • not sure if you've got basics of how to write and run bash script.. have a look: http://stackoverflow.com/documentation/bash/300/hello-world#t=201607251613433976444 – Sundeep Jul 25 '16 at 16:15
  • 1
    Please submit your script to http://shellcheck.net/ before you come here to ask for humans to debug it for you. – tripleee Jul 25 '16 at 16:26

2 Answers


What I believe you'd want to do:

#!/bin/bash

source /bin/compilervars.sh intel64

file="$HOME/a.out"

if [ ! -f "$file" ]; then
    icc code.c
fi

You would put this in a file and make it executable with chmod +x myscript. Then you would run it with ./myscript. Alternatively, you could just run it with bash myscript.
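Spelled out as commands (the name myscript and its contents are just an example):

```shell
# Create a minimal script; "myscript" is a placeholder name:
printf '#!/bin/bash\necho hello\n' > myscript
chmod +x myscript    # make it executable
./myscript           # run it directly: prints "hello"
bash myscript        # also works, without needing the executable bit
```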

Your script makes little sense. The second line will open a new bash session, but it will just sit there until you exit it. Also, changing directories back and forth is very seldom required. To execute a single command in another directory, one usually does

( cd /other/place && mycommand )

The ( ... ) tells the shell that you'd like to do this in a sub-shell. The cd happens within that sub-shell and you don't have to cd back after it's done. If the cd fails, the command will not be run.
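A quick way to see that the cd only affects the sub-shell:

```shell
pwd                   # e.g. /home/user
( cd /tmp && pwd )    # prints the temporary directory
pwd                   # still /home/user: the parent shell never moved
```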

For example: You might want to make sure you're in $HOME when you compile the code:

if [ ! -f "$file" ]; then
  ( cd "$HOME" && icc code.c )
fi

... or even pick out the directory name from the variable file and use that:

if [ -f "$file" ]; then
  ( cd "$(dirname "$file")" && icc code.c )
fi
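dirname prints everything up to, but not including, the last path component:

```shell
dirname /home/user/a.out   # prints: /home/user
dirname a.out              # prints: . (no directory part in the argument)
```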

Assigning to a variable needs to happen as I wrote it, without spaces around the =.

Likewise, there needs to be spaces after if and inside [ ... ] as I wrote it above.

I also tend to use $HOME rather than ~ in scripts as it's more descriptive.

Kusalananda
  • This works! Thank you so much @Kusalananda. I didn't know that spaces matter (and maybe lines too?). Good to know. I will be looking in to that a lot more. – Samy Jul 25 '16 at 17:31
  • @Samy Lines matter, especially if you type things on them. ;-) Sorry, that was a joke. Empty lines do not matter. Spaces matter in places. Commands need spaces after them. The `[` thing is actually a command which is usually built into the shell (see `man [`), but you could also type it as `/bin/[`, i.e. `if /bin/[ -f "$file" ]; then` (the closing `]` is still required, as the command's last argument), which is the same as `if test -f "$file"; then` (with `test`, the `]` really is gone, strange eh?). How may I confuse you further? ;-) – Kusalananda Jul 25 '16 at 17:45

A shell script isn't a record of key strokes which are typed into a terminal. If you write a script like this:

command1
bash
command2

it does not mean that the script will switch to bash and then execute command2 in the different shell. It means that bash will be run. If there is a controlling terminal, that bash will show you a prompt and wait for a command to be typed in. You will have to type exit to quit that bash. Only then will the original script continue with command2.
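If the intent really is to run something under bash specifically, hand the commands to bash non-interactively instead of starting it and waiting; a minimal sketch with placeholder commands:

```shell
command1
bash -c 'command2'    # run one command in a fresh bash, then return here
# or feed several commands on standard input via a here-document:
bash <<'EOF'
command2
command3
EOF
```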

There is no way to switch a script to a different shell halfway through. There are ways to simulate this. A script can re-execute itself using a different shell. In order to do that, the script has to contain logic to detect that it is being re-executed, so that it can prevent re-executing itself again, and to skip some code that shouldn't be run twice.

In this script, I implemented such a re-execution hack. It consists of these lines:

#
# The #!/bin/sh might be some legacy piece of crap,
# not even up to 1990 POSIX.2 spec. So the first step
# is to look for a better shell in some known places
# and re-execute ourselves with that interpreter.
#

if test x$txr_shell = x ; then
  for shell in /bin/bash /usr/bin/bash /usr/xpg4/bin/sh ; do
    if test -x $shell ; then
       txr_shell=$shell
       break
    fi
  done
  if test x$txr_shell = x ; then
    echo "No known POSIX shell found: falling back on /bin/sh, which may not work"
    txr_shell=/bin/sh
  fi
  export txr_shell
  exec $txr_shell $0 ${@+"$@"}
fi

The txr_shell variable (not a standard variable, my invention) is how this logic detects that it's been re-executed. If the variable doesn't exist then this is the original execution. When we re-execute we export txr_shell so the re-executed instance will then have this environment variable.

The variable also holds the path to the shell; that is used later in the script: it is passed through to a Makefile as the SHELL variable, so that make build recipes use that same shell. In the above logic, the contents of txr_shell don't matter; it's used as a Boolean: either it exists or it doesn't.

The programming style in the above code snippet is deliberately coded to work on very old shells. That is why test x$txr_shell = x is used instead of the modern syntax [ -z "$txr_shell" ], and why ${@+"$@"} is used instead of just "$@".
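The two spellings are equivalent ways of testing for an empty or unset variable:

```shell
unset var
test x$var = x && echo "empty (pre-POSIX style)"
[ -z "$var" ]  && echo "empty (modern style)"

var=value
test x$var = x || echo "non-empty"
```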

This style is no longer used after this point in the script, because the rest of the script runs in some good, reasonably modern shell thanks to the re-execution trick.

Kaz