Your approach can be extended to multiple strings, though you should probably switch from backticks to the modern $(...) command substitution syntax.
grep -l "$string1" $(grep -l "$string2" $(grep -l "$string3" /path/to/files/*.txt))
(For the record, the historical backticks could be nested too, though it would get ugly:
grep -l "$string1" `grep -l "$string2" \`grep -l "$string3" /path/to/files/*.txt\``
but I'm not sure whether the quotes inside would survive, and you really should have stopped using this syntax in the previous millennium.)
You could also split this into a pipeline of separate processes with xargs:
grep -l "$string1" /path/to/files/*.txt |
xargs grep -l "$string2" |
xargs grep -l "$string3"
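(If the file names could contain whitespace, xargs will split them incorrectly. With GNU grep and xargs you can switch to NUL-separated names instead; this sketch assumes the GNU-specific -Z/--null and -0 options are available:
grep -lZ "$string1" /path/to/files/*.txt |
xargs -0 grep -lZ "$string2" |
xargs -0 grep -l "$string3"
The final grep uses plain -l so the output is newline-separated again.)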
Scanning the files three times is pretty inefficient if these are large files, though. You could write a simple Awk script to scan each file only once.
awk 'FNR==1 { s=t=u=0 }
/string1/ { s=1 }
/string2/ { t=1 }
/string3/ { u=1 }
s && t && u { print FILENAME; nextfile }' /path/to/files/*.txt
If your Awk is really old, it might not support nextfile.
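In that case a portable, if slightly less efficient, alternative is to remember whether the file's name has already been printed, instead of skipping the rest of the file; a minimal sketch:
awk 'FNR==1 { s=t=u=p=0 }
/string1/ { s=1 }
/string2/ { t=1 }
/string3/ { u=1 }
s && t && u && !p { print FILENAME; p=1 }' /path/to/files/*.txt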
The logic should be straightforward: three booleans record, for each string, whether it has been seen in the current file. If they are all true, we are done with this file, so we print its name to indicate success and skip ahead with nextfile. When we reach a new file (where the per-file line number FNR is reset to 1), we start over with all the booleans set to zero (false).
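If you want to keep using shell variables like in the grep versions, rather than hardcoding the patterns into the script, you can pass them in with Awk's -v option; a sketch (note that Awk expands backslash escapes in -v values, so literal backslashes in the patterns would need doubling):
awk -v s1="$string1" -v s2="$string2" -v s3="$string3" '
FNR==1 { s=t=u=0 }
$0 ~ s1 { s=1 }
$0 ~ s2 { t=1 }
$0 ~ s3 { u=1 }
s && t && u { print FILENAME; nextfile }' /path/to/files/*.txt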