
Let's say we have a text file named text (it doesn't matter what it contains) in the current directory. When I run this command (in Ubuntu 14.04, bash version 4.3.11):

nocommand > text # make sure nocommand doesn't exist on your system

It reports a 'command not found' error and it erases the text file! I wonder whether I can avoid clobbering the file when the command doesn't exist. I tried set -o noclobber, but the same problem happens if I run:

nocommand >| text # make sure nocommand doesn't exist on your system

It seems that bash performs the redirection before looking up the command to run. Can anyone give me some advice on how to avoid this?

zhujs

5 Answers


Actually, the shell first processes the redirection and creates (or truncates) the file; only then does it evaluate the command.

So what happens exactly is: because it's a > redirection, the shell first truncates the file to zero length, then tries to evaluate a command which does not exist. That produces an error message on stderr and nothing on stdout; the file receives the (empty) stdout and so remains empty.
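This is easy to verify; the file name below is just a placeholder, and the demo runs in a scratch directory so nothing real is clobbered:

```shell
# Run in a throwaway directory; `text` is a placeholder name.
cd "$(mktemp -d)"
printf 'hello\n' > text                 # the file starts out non-empty
nocommand > text 2>/dev/null || true    # lookup fails, but the redirection already ran
wc -c < text                            # byte count is 0: the file is empty
```

The `|| true` just keeps the failed lookup (exit status 127) from aborting a script that runs with `set -e`.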

I agree with Nitesh that you simply need to check whether the command exists first, but according to this thread you should avoid using which. I think a good starting point would be to check at the beginning of your script that you can run all the required commands (see the thread for three solutions), and abort the script otherwise.
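One way to sketch that "check up front" approach is with the POSIX `command -v` builtin instead of `which`; the command names checked at the bottom are placeholders for your script's real dependencies:

```shell
# Abort early if any required command is missing.
require() {
    for cmd in "$@"; do
        if ! command -v "$cmd" >/dev/null 2>&1; then
            echo "error: required command '$cmd' not found" >&2
            return 1
        fi
    done
}

# At the top of the script (sh and cat stand in for real dependencies):
require sh cat || exit 1
```

`command -v` is specified by POSIX and reports builtins and functions as well as executables, which is usually what you want here.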

Emilien

This writes to the file only if the pipe sends at least one character:

nocommand | (
    IFS= read -d '' -n 1 || exit
    exec >myfile
    [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
    exec cat
) 

Or using a function:

function protected_write {
    IFS= read -d '' -n 1 || exit
    exec >"$1"
    [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
    exec cat
}

nocommand | protected_write myfile

Note that if the lastpipe option is enabled, you'll have to run it in a subshell:

nocommand | ( protected_write myfile )

Optionally, you can also make the function run in a subshell by default:

function protected_write {
    (
        IFS= read -d '' -n 1 || exit
        exec >"$1"
        [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
        exec cat
    )
}
  • () runs its contents in a subshell. A subshell is a fork and runs in a separate process space. In x | y, y also runs in a subshell by default, unless the lastpipe option (try shopt lastpipe) is enabled.
  • IFS= read -d '' -n 1 waits for a single character (see help read) and returns a zero exit code when it reads one, which bypasses the exit.
  • exec >"$1" redirects stdout to the file. This makes everything that prints to stdout print to the file instead.
  • Every character read, except \x00, is stored in REPLY; that is why we do printf '\x00' when REPLY is null (empty).
  • exec cat replaces the subshell's process with cat, which sends everything it receives to the file and finishes the remaining work. See help exec.
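A quick way to check the behavior (bash required; the file names are placeholders):

```shell
cd "$(mktemp -d)"    # scratch directory for the demo

protected_write() {
    (
        IFS= read -d '' -n 1 || exit
        exec >"$1"
        [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
        exec cat
    )
}

# A nonexistent command sends nothing, so the file is never created:
nocommand 2>/dev/null | protected_write out1 || true
[ -e out1 ] && echo "created" || echo "not created"   # prints "not created"

# A command that produces output writes through normally:
printf 'hello\n' | protected_write out2
cat out2                                              # prints "hello"
```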
konsolebox
    +1; nicely done; seems like running the function in a subshell is the safest choice. If you switch to POSIX function syntax (your favorite), you can even replace `function protected_write { ( ... ) }` with just `protected_write() ( ... )` (i.e. use `(...)` in lieu of `{ (...) }`). Think of all the keystrokes you'll save! – mklement0 Jul 25 '14 at 03:37
    :) It should be noted, however, that your solution is not based on whether the command in the previous pipeline segment _succeeded_, but on whether (as you note) it sent _at least 1 byte to stdout_ - which may or may not be the same. Is there a way to detect the _former_ condition? – mklement0 Jul 25 '14 at 03:42
  • @mklement0 In C or some languages probably but not in the shell. – konsolebox Jul 25 '14 at 03:44
    No I actually doubt it. See some threads like [this](http://stackoverflow.com/questions/20488574/output-redirection-using-fork-and-execl). Redirection happens before an application executes but not during. We may do all sanitation checks before running an application but that still may not guarantee that the application would run well, I think. – konsolebox Jul 25 '14 at 03:50

Write to a temporary file first, and only move it into place over the desired file if the command succeeds.

nocommand > tmp.txt && mv tmp.txt text

This avoids errors not only when nocommand doesn't exist, but also when an existing command exits before it can finish writing its output, so you don't overwrite text with incomplete data.

With a little more work, you can clean up the temp file in the event of an error.

{ nocommand > tmp.txt || { rm tmp.txt; false; }; } && mv tmp.txt text

The inner command group ensures that the exit status of the outer command group is non-zero so that even if the rm succeeds, the mv command is not triggered.

A simpler command that carries the slight risk of removing the temp file when nocommand succeeds but the mv fails is

nocommand > tmp.txt && mv tmp.txt text || rm tmp.txt
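A sketch of a slightly more defensive variant, using mktemp for a unique temp-file name and a trap so the file is cleaned up however the script exits. Here `date` stands in for the real command and the demo runs in a scratch directory:

```shell
cd "$(mktemp -d)"                 # scratch directory for the demo

tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT          # always remove the temp file on exit

if date > "$tmp"; then            # `date` stands in for the real command
    mv "$tmp" text                # only replace `text` on success
fi
```

After a successful mv the trap's rm -f is a harmless no-op; after a failure it removes the partial output.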
chepner

If you do:

set -o noclobber

then

invalidcmd > myfile

if myfile exists in the current directory then you will get:

-bash: myfile: cannot overwrite existing file
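A quick demonstration (the file name is a placeholder); note that the 2>/dev/null comes before the output redirection, so the shell's "cannot overwrite" message is itself silenced:

```shell
cd "$(mktemp -d)"
printf 'keep me\n' > myfile
set -o noclobber
nocommand 2>/dev/null > myfile || true   # redirection refused; file untouched
cat myfile                               # prints "keep me"
```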
anubhava

Check using the "which" command:

#!/usr/bin/env bash

command_name="npm2" # Add your command here
command=$(which "$command_name")

if [ -z "$command" ]; then # command was not found
        echo "Command not found"
else # command exists; go ahead with your logic
        echo "$command"
fi

Hope this helps

Nitesh morajkar