16

If a command in a make rule fails, such as gcc here, make exits with an error:

gcc
gcc: fatal error: no input files
compilation terminated.
make: *** [main.o] Error 4

However, if the recipe contains a pipe, the exit status of the last command in the pipe is used. For example, gcc | cat will not fail, because cat succeeds.

I'm aware the exit codes for all the commands in the pipe are stored in the PIPESTATUS array, so I could get the error code 4 with ${PIPESTATUS[0]}. How should I structure my makefile to handle a piped command and exit on failure as normal?
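
For illustration, the same thing at an interactive bash prompt (the 4 is simply the code gcc returned in the output above):

gcc | cat
gcc: fatal error: no input files
compilation terminated.
echo "$? ${PIPESTATUS[0]}"
0 4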


As in the comments, another example is gcc | grep something. Here, I assume the most desirable behavior is still for gcc, and only gcc, to cause a failure, and not grep when it doesn't find anything.

jozxyqk
  • 16,424
  • 12
  • 91
  • 180
  • Taking a step back, can you avoid the pipeline altogether? – chepner Aug 14 '14 at 12:54
  • @chepner is that possible? I can't think of a way that doesn't use temp files or named pipes. `cat <( gcc )` still has the same issue. – jozxyqk Aug 14 '14 at 13:16
  • I thought `gcc | cat` was just an example; I can't think of any reason to actually do that. If you need to save the output somewhere else, `gcc > ....` should work. If `cat` is a placeholder for a more complicated command, there might be other options. – chepner Aug 14 '14 at 13:19
  • @chepner yes, `gcc | grep error` or something is probably a better example. – jozxyqk Aug 14 '14 at 13:22
  • 3
    If you want the build process to abort if `gcc` fails, I would simply redirect its output/error to a file, then process that file only if `gcc` succeeds. Just because you *can* use a pipe doesn't mean you *should*. – chepner Aug 14 '14 at 14:02

4 Answers

17

You should be able to tell make to use bash instead of sh, and have bash run with set -o pipefail so that a pipeline reports failure if any command in it fails.

In GNU Make 3.81 (and presumably earlier though I don't know for sure) you should be able to do this with SHELL = /bin/bash -o pipefail.

In GNU Make 3.82 (and newer) you should be able to do this with SHELL = /bin/bash and .SHELLFLAGS = -o pipefail -c (though I don't know if adding -c to the end like that is necessary, or if make will add it for you even when you specify .SHELLFLAGS).
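
Put together, a minimal sketch for 3.82 and newer (the all target and the gcc | cat recipe are only placeholders):

SHELL = /bin/bash
.SHELLFLAGS = -o pipefail -c

all:
        gcc | cat    # the recipe now fails with gcc's exit status instead of cat's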

From the bash man page:

The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value.

Etan Reisner
  • 77,877
  • 8
  • 106
  • 148
  • This works fine. I did need the `-c` flag. I guess one thing to note here is that things like `gcc | grep` might have issues with grep causing a failure if nothing was found. – jozxyqk Aug 14 '14 at 12:05
  • 2
    If just needed for a single line in make, [`set -o pipefail ; gcc | cat`](http://stackoverflow.com/a/19804002/1888983) would be equivalent. – jozxyqk Aug 14 '14 at 12:08
  • Yes, anything that globally modifies default state is going to have side-effects. – Etan Reisner Aug 14 '14 at 16:13
  • 2
    And personally I think anything that uses `gcc | grep` in a makefile rule context is already very much gone in the wrong direction. – Etan Reisner Aug 14 '14 at 16:14

10

I would go for pipefail. But if you really don't want it (or if you want to fail only on the first process, not on failures from the rest of the pipe):

SHELL=bash

all:
        gcc | cat ; exit "$${PIPESTATUS[0]}"
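
For the gcc | grep something case from the question, the same trick makes only gcc's status decide the outcome; the check target name and the grep pattern are just placeholders in this sketch:

check:
        gcc | grep something ; exit "$${PIPESTATUS[0]}"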
Denilson Sá Maia
  • 47,466
  • 33
  • 109
  • 111
Sylvain Leroux
  • 50,096
  • 7
  • 103
  • 125

1

Just add this at the beginning of your makefile:

SHELL=/bin/bash -o pipefail

Now you can, for example, generate the errors.err file from the objects (1st rule) without worrying that it will be overwritten by the executable (2nd rule).

%.o : %.c
    gcc $(CFLAGS) $(CPPFLAGS) $^ -o $@ 2>&1 | tee errors.err

%.x : %.o $(OBJECTS)
    gcc $(LDLIBS) $^ -o $@ 2>&1 | tee errors.err

Without it, make gets no error from rule 1 and runs rule 2, overwriting the file. You end up with only a single line in errors.err stating that there is no object file for gcc to link:

gcc: error: program.o: No such file or directory
DrBeco
  • 11,237
  • 9
  • 59
  • 76
  • 1
    When the build is parallel (e.g. Make is called as `make -j`), the errors.err will get clobbered. The top-voted answer has already given the advice to use Bash’s pipefail. – Palec Jul 12 '15 at 17:16
  • The answer from Sylvain suggests `pipefail` but uses `SHELL=bash`. It took some knowledge to set this instead of using `setopt`. Anyway, I'll edit my answer to point the problem with parallelized call of `make`. Thanks. – DrBeco Jul 12 '15 at 18:20
  • `program.x errors.err: program.c` does **not** tell make that the command generates both outputs at the same time. It tells make that the command can be used to generate *either* of those files (but that they are each made on their own by it so running it more than once is correct). – Etan Reisner Jun 08 '16 at 11:25
  • Hi @EtanReisner, thanks for commenting. Now, I didn't say it generates both at the same time, just that in my tests it prevented clobbering. Anyway, I appreciate the comment alongside the downvote; without it, it's hard to understand what may be wrong. If you have anything else to share, please be welcome. Cheers. – DrBeco Jun 10 '16 at 03:14
  • My point was that it doesn't prevent clobbering and may actually *increase* the chances of it being clobbered (in case `errors.err` is used by more than one target). If you used `program.err` and used `%.x %.err: %.c` then you would actually be telling make that the rule generates both files at the same time and it would actually be safe for that usage. – Etan Reisner Jun 10 '16 at 11:14
  • Hi Etan, I was about to edit to take care of this hypothetical situation, but after re-reading the OP question, it would be off-topic. So, just for the sake of completeness, I'll make a note at the end, but I'll not expand with examples. – DrBeco Jun 10 '16 at 14:29
  • You probably did not get Etan's comment and the underlying semantic difference, @DrBeco. `%.x %.err: %.c` introduces a single rule with multiple products, while `program.x program.err: program.c` introduces two rules with a single product and the same recipe. – Palec Jun 11 '16 at 09:23
  • Ok, I see. I decided to delete that addendum and leave just the core. Maybe someone who would like to be clearer and more complete could write another answer. This one is now simple. Unfortunately my life is a bit too busy right now to help more. Maybe in the future I can come back and help more. Thank you. – DrBeco Jun 12 '16 at 03:21

1

A reasonable and portable approach is to refactor your build jobs to use files instead of pipes. For example:

foo:
    gcc >$@.log
    grep success $@.log
    cat $@.log
    rm $@.log

Removing the log file after printing it is obviously not necessary; this is just a general template. The essential part is the redirection that replaces the pipeline. You could even refactor it into multiple recipes:

foo: foo.tmp foo.log
    grep success $@.log
    mv $< $@
%.tmp %.log:
    gcc -o $*.tmp >$*.log

Properly cleaning up the temporary artefacts and generally managing them is an obvious drawback of this approach.
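
If it helps, a throwaway clean rule is one way to manage those files (just a sketch; the names match the example above):

.PHONY: clean
clean:
    rm -f foo foo.tmp foo.log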

tripleee
  • 175,061
  • 34
  • 275
  • 318
  • As I commented under another answer, `foo.tmp foo.log:` actually creates two rules with the same recipe, therefore the recipe would be run twice in the scenario you present. – Palec Jun 11 '16 at 12:57
  • @Palec: Thanks for the observation. I tried to change the recipe into a pattern rule but I'm not sure if it will fix this; not in a place where I can test. – tripleee Jun 11 '16 at 13:48
  • 1
    Fixed a copy-paste mistake. Have not tested, but this should work as expected. – Palec Jun 11 '16 at 15:40