
What is the easiest way to delete the last occurrence of a pattern in a file using grep/awk/bash/etc.? For example, I have a file in which the expression "hello world" appears multiple times, and I would like to delete the entire line containing the last occurrence (or just the last occurrence itself). Thanks!

lenzinho
  • Contrary to what you seem to believe, StackOverflow isn't a free coding service (or exam answering service). You're expected to show your code, along with relevant sample inputs, expected outputs, actual error msgs as well as your comments about where you are stuck. Please show your best effort to solve this problem (use the {} tool at the top left of the edit box to format code/data/output/errMsgs correctly), and people may be able to help you. Good luck. – shellter Jan 25 '17 at 04:02
  • And please read http://stackoverflow.com/help/how-to-ask, http://stackoverflow.com/help/dont-ask, http://stackoverflow.com/help/mcve and take the [tour](http://stackoverflow.com/tour) before posting more Qs here. Thanks. – shellter Jan 25 '17 at 04:03

3 Answers


If you happen to have GNU coreutils (or you're willing to install them) you can use the occasionally useful tac command to flip the file for processing, allowing you to treat this problem as "remove the first occurrence of the pattern", which is somewhat simpler:

tac /path/to/file | awk '!found && /hello world/{found=1;next}1' | tac
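
If you want to write the result back to the same file, a minimal sketch (assuming a temporary file is acceptable; adjust the paths to taste):

tac /path/to/file | awk '!found && /hello world/{found=1;next}1' | tac > /path/to/file.tmp \
  && mv /path/to/file.tmp /path/to/file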

Otherwise, you would need to do something like buffering all the lines of the file in memory so that you can print them out at the end (a sketch of that variant follows the two-pass command below). Or you could process the file twice, the first time just looking for the line number to omit, but that requires that the data be in a file, rather than a stream you're piping into the command:

awk \
  -v line="$(grep -hn "hello world" /path/to/file | tail -n1 | cut -f1 -d:)" \
  'NR != line' /path/to/file
rici

You can do:

awk '/^hello world/ {max=NR} 
     {a[NR]=$0} 
     END{for (i=1;i<=NR;i++) {if (i!=max) print a[i]}}' file

Or, if the file size is a concern, traverse it twice and use grep to count the matches. Skip the last match with awk:

awk -v last=$(grep -c '^hello world' file) '/^hello world/ && ++cnt==last{ next } 1 ' file
dawg

You can read the file twice and achieve this without an array, as shown below; no need for tac.

Input

[akshay@gold tmp]$ cat f
1   hai
2   hello
3   this
4   is
5   test
6   hello
7   this
8   is
9   test

Output

[akshay@gold tmp]$ awk 'last==FNR{next}FNR!=NR{print;next}/hello/{last=FNR}' f f 
1   hai
2   hello
3   this
4   is
5   test
7   this
8   is
9   test

Explanation

NR - the number of records (lines) read so far, across all input files.

FNR - the record number within the current input file; it resets to 1 at the start of each file.
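
If the difference is not obvious, a quick throwaway sketch (using the same file f twice, just as the command above does):

awk '{print NR, FNR, $0}' f f

On the second copy of f, FNR restarts at 1 while NR keeps counting past 9, which is what lets the script tell the two passes apart.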

awk '
      # While reading the file the first time this always evaluates
      # to false; on the second pass it skips the line whose number
      # was saved in last
      last==FNR{
                  next
      }

      # False on the first pass, true on the second:
      # print the line and immediately move on to the next one,
      # so the /hello/ block below only runs on the first pass
      FNR!=NR{
                print
                next
      }

      # Runs only on the first pass: search for "hello" and save the
      # line number in last; after the first pass, last holds the
      # line number of the last match
      /hello/{
                last=FNR
      }
    ' f f                          # the file is read twice
Akshay Hegde