
Output of awk '{print $4}' is

b05808aa-c6ad-4d30-a334-198ff5726f7c
59996d37-9008-4b3b-ab22-340955cb6019
2b41f358-ff6d-418c-a0d3-ac7151c03b78
7ac4995c-ff2c-4717-a2ac-e6870a5670f0

I need to grep the file `st.log` for these records. Something like

awk '{print $4}' | xargs -i grep -w "pattern from awk" st.log

but I don't know how to pass the pattern correctly.

linlav
    why can't you just use awk? – 123 Jun 13 '17 at 12:17
  • What is the structure of `st.log`? Some representative sample lines would help. If the GUIDs are always in a specific field, then the awk script could be made more efficient. – Tom Fenech Jun 13 '17 at 12:22
  • [edit] your question to include a [mcve] with concise, testable sample input and expected output so we're not guessing at what you're asking for help with. Right now we don't even know if you're running the awk command on `st.log`, same as your grep command, or something else, which greatly impacts the potential solutions. Nor do we know if your files are terabytes or 10 lines each, which also has an impact. Your question is extremely unclear as written and doesn't lend itself to us helping you come up with a good solution. – Ed Morton Jun 13 '17 at 12:56
  • Put your filter in `awk` so that only the requested elements are printed in the first place. `awk '$4 ~ /^b/ {print $4}'`, for example. – chepner Jun 13 '17 at 13:28

2 Answers

2

What about

awk '{print $4}' | grep -F -f - st.log

Credits to Eric Renouf, who noticed that `-f -` can be used to read patterns from standard input instead of `-f <(cat)`. Note: `-f /dev/stdin` also works and avoids launching a new process.
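A minimal end-to-end sketch of that pipeline (the file names `ids.txt` and the sample contents are made up for illustration):

```shell
# Hypothetical sample data: ids.txt stands in for whatever file awk reads,
# st.log is the file to search.
printf '%s\n' \
  'x x x b05808aa-c6ad-4d30-a334-198ff5726f7c' \
  'x x x 59996d37-9008-4b3b-ab22-340955cb6019' > ids.txt
printf '%s\n' \
  'hit b05808aa-c6ad-4d30-a334-198ff5726f7c ok' \
  'unrelated noise' \
  'hit 59996d37-9008-4b3b-ab22-340955cb6019 ok' > st.log

# Field 4 becomes the fixed-string pattern list, read from stdin via -f -
awk '{print $4}' ids.txt | grep -F -f - st.log
```

This prints the first and third lines of `st.log` and skips the noise line, with a single `grep` invocation no matter how many patterns there are.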

or, closer to the question, to have the output ordered by pattern:

awk '{print $4}' | xargs -i grep -F {} st.log 

Maybe `-F`, not `-w`, was the option the OP needed:

grep --help
-F, --fixed-strings       PATTERN is a set of newline-separated strings
-w, --word-regexp         force PATTERN to match only whole words

`-w` matches only lines that contain the pattern as a whole word.

Examples:

grep -w . <<<a       # matches
grep -w . <<<ab      # doesn't match
grep -F . <<<a       # doesn't match
grep -F . <<<a.b     # matches
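One more sketch of why `-F` matters when patterns come from data (file names are made up): without it, `grep` treats characters such as `.` in the pattern list as regex metacharacters.

```shell
printf 'a.b\n' > pat.txt
printf 'a.b\naxb\n' > log.txt

grep -f pat.txt log.txt      # '.' is a metacharacter: matches both lines
grep -F -f pat.txt log.txt   # fixed string: matches only the 'a.b' line
```

UUIDs contain no metacharacters, so `-F` changes nothing for them here, but it makes the pipeline safe for arbitrary field values and is typically faster.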
Nahuel Fouilleul
  • In addition to what Ed says, you needn't use a process substitution here either, `grep -w -f - st.log` would read the patterns from stdin without the `cat` – Eric Renouf Jun 13 '17 at 12:27
  • @EdMorton: I don't agree; awk, grep and sed are simple operations to be pipelined, otherwise one could use a more powerful language like perl – Nahuel Fouilleul Jun 13 '17 at 12:28
  • @TomFenech But if it's reading the output of `awk` as here you already get it. I'm just saying `grep -f -` is easier than `grep -f <(cat)` – Eric Renouf Jun 13 '17 at 12:29
  • @EricRenouf I tried what you suggest before and it didn't work – Nahuel Fouilleul Jun 13 '17 at 12:30
  • @NahuelFouilleul `awk '{print $4}' | grep -w -f - st.log` doesn't work? What about it didn't work? – Eric Renouf Jun 13 '17 at 12:32
  • Also, if you are using the process substitution you also wouldn't need the pipe, you could do `grep -w -f <(awk '{print $4}' somefile) st.log`, if `awk` is reading from a file and not `stdin` itself – Eric Renouf Jun 13 '17 at 12:33
  • @EricRenouf which version of grep accept `-f -` do you have the reference ? `echo a | grep -f - << – Nahuel Fouilleul Jun 13 '17 at 12:33
  • I've tested `-f -` with both GNU grep and BSD grep and had the expected results – Eric Renouf Jun 13 '17 at 12:38
  • I just tried the same `grep -f -` as you and used `strace` to figure out why your example doesn't work when using files actually does. It seems the herestring and the pipe step on each other for becoming `stdin` so you end up reading the herestring as your pattern list and then have an empty stream to read for the search space, so it doesn't match anything. Try the experiment again using actual files for the search space though and you should see it working as expected – Eric Renouf Jun 13 '17 at 13:28
  • @EricRenouf ok replacing `<< – Nahuel Fouilleul Jun 13 '17 at 13:37
  • Why not just process-substitute the awk command? Something like `grep -F -f <(awk '...') st.log`, for example. It may also be worth suggesting to write the `awk` output to a file, if it's to be used more than once. As Ed says in the comment to the question, so much depends on the scale of the inputs and outputs. – Toby Speight Jun 14 '17 at 07:30
  • You're right; however, here the answer addresses just the question as asked – Nahuel Fouilleul Jun 14 '17 at 07:46
-2

Maybe something along these lines would be helpful:

How to process each line received as a result of grep command

awk '{print $4}' | while read -r line; do 
    grep $line st.log
done
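A sketch of the same loop with the quoting the commenters below ask for (file names and contents are hypothetical):

```shell
# ids.txt stands in for the awk input; st.log is the file to search.
printf 'x x x alpha\n' > ids.txt
printf 'alpha line\nother line\n' > st.log

# "$line" is quoted so patterns containing spaces or globs survive;
# -- stops option parsing in case a pattern starts with a dash.
awk '{print $4}' ids.txt | while read -r line; do
    grep -F -- "$line" st.log
done
```

This still forks one `grep` per pattern, which is the performance objection raised in the comments; `grep -F -f -` from the other answer avoids that.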
Learner
  • I didn't downvote, but this is quite a bad solution since you fork a new grep process for every line. You also should quote your variables. – 123 Jun 13 '17 at 12:53
  • Thanks for explaining. – Learner Jun 13 '17 at 18:55
  • See [why-is-using-a-shell-loop-to-process-text-considered-bad-practice](https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice) for some, but not all, of the issues with your script. – Ed Morton Jun 13 '17 at 20:37