7

I have a huge text file (~1.5GB) with numerous lines ending with ".Ends".
I need a Linux one-liner (perl/awk/sed) to find the last place '.Ends' appears in the file and add a couple of lines before it.

I tried using tac twice, but stumbled with my Perl:

When I use:
tac ../../test | perl -pi -e 'BEGIN {$flag = 1} if ($flag==1 && /.Ends/) {$flag = 0 ; print "someline\n"}' | tac
It first prints "someline\n" and only then prints the .Ends. The result is:

.Ends
someline

When I use:
tac ../../test | perl -e 'BEGIN {$flag = 1} print ; if ($flag==1 && /.Ends/) {$flag = 0 ; print "someline\n"}' | tac
It doesn’t print anything.

And when I use:
tac ../../test | perl -p -e 'BEGIN {$flag = 1} print $_ ; if ($flag==1 && /.Ends/) {$flag = 0 ; print "someline\n"}' | tac
It prints everything twice:

.Ends
someline
.Ends

Is there a smooth way to perform this edit?
It doesn't have to follow my solution direction; I'm not picky...
Bonus: if the lines could come from a different file, that would be great (but really not a must).

Edit
test input file:

gla2 
fla3 
dla4 
rfa5 
.Ends
shu
sha
she
.Ends
res
pes
ges
.Ends  
--->
...
pes
ges
someline
.Ends
# * some irrelevant junk * #
user2141046
  • You're right. Done. – user2141046 Nov 19 '22 at 20:56
  • will the last line of the file always end with `.Ends`? – markp-fuso Nov 19 '22 at 20:58
  • No. there are various other lines after the last .Ends, but I don't care about these – user2141046 Nov 19 '22 at 20:59
  • 2
    while you may not care about them (lines after the last `.Ends`) it would matter when coming up with a solution, ie, it's easier to *always* replace the last line – markp-fuso Nov 19 '22 at 21:06
  • I'm certain it's easier, but it's not relevant - all the lines after the last .Ends are comments and information, nothing functional, so the insertion must be within the .Ends bound. – user2141046 Nov 19 '22 at 21:14
  • 1
    Why do you need an automated function to edit "a file" in one place? Sounds like all you need to do is use a text editor with a search function. – TLP Nov 19 '22 at 22:56
  • 1
    Regarding `it's not relevant` - yes, it is. If you don't state in your question that there could be lines after the last `.Ends` and don't include lines after the last `.Ends` in your example then someone trying to help you might reasonably create and test a solution that relies on `.Ends` being the last line and thereby waste their time and, to a much lesser extent, yours. – Ed Morton Nov 20 '22 at 00:28
  • You added some white space to the end of the last `.Ends` line in your input now - can that really be present or is it a mistake? – Ed Morton Nov 20 '22 at 11:43
  • 2 whitespaces, to skip line. theoretically they can also exist in the input (nobody promised it will be ^\.Ends$), but I just wanted to have the added lines, as you requested above. I'll remove them if skip line can be taken without them – user2141046 Nov 20 '22 at 11:56
  • You said you wanted to find `lines ending with ".Ends"`, not `lines ending with ".Ends" possibly followed by spaces or other characters`. Does this mean the lines might also be `foobar.Ends` or `foo.Ends.bar` or other sequences of characters with `.Ends` in the middle? I don't know what `2 whitespaces, to skip line.` and `if skip line can be taken without them` means. – Ed Morton Nov 20 '22 at 11:59

6 Answers

6

Assuming that the last instance of that phrase is far down the file, it helps performance greatly to process the file from the back, for example using File::ReadBackwards.

Since you need to add other text to the file before the last marker, we have to copy the rest of it so we can put it back after the addition.

use warnings;
use strict;
use feature 'say';
use Path::Tiny;
use File::ReadBackwards;
    
my $file = shift // die "Usage: $0 file\n"; 

my $bw = File::ReadBackwards->new($file);

my @rest_after_marker; 

while ( my $line = $bw->readline ) { 
    unshift @rest_after_marker, $line;
    last if $line =~ /\.Ends/;
}
# Position after which to add text and copy back the rest
my $pos = $bw->tell;    
$bw->close;

open my $fh, '+<', $file or die $!;    
seek $fh, $pos, 0;
truncate $fh, $pos;    
print $fh $_ for path("add.txt")->slurp, @rest_after_marker;

New text to add before the last .Ends is presumably in a file add.txt.
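
For example, a hedged usage sketch (the script name insert_before_last_ends.pl and the contents of add.txt are my assumption, not part of the answer):

$ cat add.txt
someline
$ perl insert_before_last_ends.pl ../../test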

The question remains of how much of the file there is after the last .Ends marker: we copy all of that into memory so we can write it back. If that is too much, copy it to a temporary file instead, then write it back from there and remove the temporary file.

zdim
  • Note, this edits the input file in place. – zdim Nov 19 '22 at 23:35
  • This isn't a one-liner. Code seems valid (and I really prefer in-place editing), but that's not what I asked for... – user2141046 Nov 20 '22 at 07:52
  • 1
    @user2141046 Well, yeah ... I just removed a note on that, which I had in text, since I consider it a bit irrelevant in general. (Also, people often mention it only to turn out that it doesn't matter -- and some other requirements here are unclear.) This code does exactly what's asked and is about as efficient as possible, and that may matter on a gig-and-a-half file. But feel free to discard if it being a "one"-liner matters (this can of course be shortened and turned into a command-line program but that'd be misplaced in my opinion). I hope it's still of use to others. – zdim Nov 20 '22 at 08:14
  • I agree, and voted you up regardless. let it be for the greater good :) – user2141046 Nov 20 '22 at 09:04
  • @user2141046, Re "*This isn't a one-liner.*", Sure it is. Nothing stops you from putting it on one line. – ikegami Nov 20 '22 at 17:39
4

Using GNU sed: -i.bak edits the original file in place while saving a backup with a .bak extension, and -z treats the whole file as a single record, so the greedy (.*) matches everything up to the last .Ends.

$ sed -Ezi.bak 's/(.*)(\.Ends)/\1newline\nnewline\n\2/' input_file
$ cat input_file
gla2
fla3
dla4
rfa5
.Ends
shu
sha
she
.Ends
res
pes
ges
.Ends
--->
...
pes
ges
someline
newline
newline
.Ends
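
If the new lines should come from a separate file (the bonus in the question), one possible sketch along the same lines, assuming GNU sed and an add.txt whose lines contain no characters that are special in a sed replacement (&, \, /):

$ new=$(sed 's/$/\\n/' add.txt | tr -d '\n')
$ sed -Ezi.bak "s/(.*)(\.Ends)/\1${new}\2/" input_file
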
HatLess
3

Inputs:

$ cat test.dat
dla4
.Ends
she
.Ends
res
.Ends
abc

$ cat new.dat
newline 111
newline 222

One awk idea that sticks with OP's tac | <process> | tac approach:

$ tac test.dat | awk -v new_dat="new.dat" '1;/\.Ends/ && !(seen++) {system("tac " new_dat)}' | tac
dla4
.Ends
she
.Ends
res
newline 111
newline 222
.Ends
abc

Another awk idea that replaces the dual tac calls with a dual-pass of the input file:

$ awk -v new_dat="new.dat" 'FNR==NR { if ($0 ~ /\.Ends/) lastline=FNR; next} FNR==lastline { system("cat "new_dat) }; 1' test.dat test.dat
dla4
.Ends
she
.Ends
res
newline 111
newline 222
.Ends
abc

NOTES:

  • both of these solutions write the modified data to stdout (same thing OP's current code does)
  • neither of these solutions modifies the original input file (test.dat)
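
Not from the answer, but a sketch of a variant of the first idea that reads new.dat from within awk via getline instead of calling system(), so the position of the inserted lines in the output doesn't depend on when the spawned command's output gets flushed:

$ tac test.dat | awk -v new_dat="new.dat" '1; /\.Ends/ && !(seen++) { while ((getline line < new_dat) > 0) buf[++n]=line; for (i=n; i>=1; i--) print buf[i] }' | tac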
markp-fuso
  • nice! I really liked the definition of seen in the middle, also the call to system from the oneliner is new for me. Will keep the post open for a while longer, to see if anyone can suggest a trick for in-place editing, but your answer is working and is totally legit! Thanks. – user2141046 Nov 19 '22 at 21:27
  • wow, edit is interesting. will try that as well. – user2141046 Nov 19 '22 at 21:29
  • Thanks @markp-fuso both your answers work nicely. first method (with both `tac` commands) works slightly faster and does the better job. – user2141046 Nov 20 '22 at 10:12
  • `/.Ends/` would match a line that contains `FooEndsBar` and you can't rely on the output of `system("tac " new_dat)` appearing where you want it inside the output of the awk command that calls it (not sure exactly why, buffering maybe, but I've seen the called command output come after all of the awk output rather than in the middle of it), you'd need to call the command and use a while getline loop then print it from awk to robustly ensure the output order. – Ed Morton Nov 20 '22 at 11:50
  • I just tried and can't reproduce that `system()` issue I mentioned using `tac` in the middle of a large input/output stream so maybe it happens in some other context (pipes in the command?), idk, but I personally still wouldn't trust it. – Ed Morton Nov 20 '22 at 12:18
  • @EdMorton It worked for me on the 1.5GB file. in any case, can look for `/^.Ends/` and be certain I get what I need. – user2141046 Nov 20 '22 at 14:30
  • 1
    Things that aren't guaranteed to work usually do work until they don't. You can't test something that might not work, find it works in your test(s) and deduce from that that it'll always work. For example an awk loop like `for ( i in arr ) print i` will usually print `i` in some specific order but then sometimes it won't. Similarly `/^.Ends/` will match what you want but also strings you don't want, e.g. `BEnds`, so it'll probably do what you want for the data you're testing with but then it'll fail later with different data. – Ed Morton Nov 20 '22 at 17:11
  • In my [test](https://gist.github.com/ikegami/edbda23247d71288099fb4f6cbd2a654), your first solution is 50x slower than zdim's, and your second solution is 2x slower than your first. TLP's is off the chart slow. – ikegami Nov 22 '22 at 19:21
  • @ikegami that doesn't surprise me .... both answers are reading the entire source file twice; as for zdim's solution, again, doesn't surprise me ... gets to the row quickly (assuming near the end of the file); did you time the `ed` solution? – markp-fuso Nov 22 '22 at 20:18
1

Inputs:

$ cat test.dat
dla4
.Ends
she
.Ends
res
.Ends
abc

$ cat new.dat
newline 111
newline 222

One ed approach:

$ ed test.dat >/dev/null 2>&1 <<EOF
1
?.Ends
-1r new.dat
wq
EOF

Or as a one-liner:

$ ed test.dat < <(printf '%s\n' 1 ?.Ends '-1r new.dat' wq) >/dev/null 2>&1
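
Or, if the shell doesn't offer process substitution, the same commands can be piped in (a sketch, not part of the original answer; ed -s suppresses the byte counts):

$ printf '%s\n' 1 ?.Ends '-1r new.dat' wq | ed -s test.dat >/dev/null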

Where:

  • >/dev/null 2>&1 - brute force suppression of diagnostic and info messages
  • 1 - go to line #1
  • ?.Ends - search backwards in file for string .Ends (ie, find last .Ends in file)
  • -1r new.dat - move back/up 1 line (-1) in file and read in the contents of new.dat
  • wq - write and quit (aka save and exit)

This generates:

$ cat test.dat
dla4
.Ends
she
.Ends
res
newline 111
newline 222
.Ends
abc

NOTE: unlike OP's current code which writes the modified data to stdout, this solution modifies the original input file (test.dat)

markp-fuso
  • I believe your answer works (heck, both of your previous answers work and I'm still trying to figure out the second), but this is not a one-liner. – user2141046 Nov 20 '22 at 07:49
  • @user2141046 re: `not a one-liner` ... an 'easy' solution is to place the code in a function wrapper, or place in a file and then source the file ... both methods can allow for a 'one-liner' solution at the command prompt – markp-fuso Nov 20 '22 at 13:42
  • to be honest ... I'm not an `ed` user so this answer took about 15 minutes to research and test but during that research I recall a few examples where a multi-line answer (like above) was collapsed into a single line ... something like (but don't quote me): `ed '1;?.Ends;-1r new.dat;wq' test.dat` – markp-fuso Nov 20 '22 at 13:45
  • net result ... in many cases a multi-liner *can* be reduced to a one-liner – markp-fuso Nov 20 '22 at 13:45
  • 1
    @user2141046 fwiw ... after a few minutes of chatting with Mr Google I was able to figure out how to write this one as a one-liner, too; answer updated – markp-fuso Nov 21 '22 at 14:39
  • Thanks, but I'll stick to your other answer, with the `awk`. as the rule says, if it works - don't fix it :) – user2141046 Nov 23 '22 at 07:10
1

Since you want to read the new lines from a file:

$ cat new
foo
bar
etc
$ tac file | awk 'NR==FNR{str=$0 ORS str; next} {print} $0==".Ends"{printf "%s", str; str=""}' new - | tac
gla2
fla3
dla4
rfa5
.Ends
shu
sha
she
.Ends
res
pes
ges
.Ends
--->
...
pes
ges
someline
foo
bar
etc
.Ends
# * some irrelevant junk * #

The above assumes the white space after .Ends on some lines of your posted sample input is a mistake. If it really can be present then change $0==".Ends" to /^\.Ends[[:space:]]*$/, or even /^[[:space:]]*\.Ends[[:space:]]*$/ if there might also be leading white space on those lines, or just /\.Ends/ if there can be any chars before/after .Ends.
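
For instance, with the whitespace-tolerant condition (a sketch; only the condition differs from the command above):

$ tac file | awk 'NR==FNR{str=$0 ORS str; next} {print} /^[[:space:]]*\.Ends[[:space:]]*$/{printf "%s", str; str=""}' new - | tac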

Ed Morton
  • Can you please explain what's the dash after "new" is doing in this awk command? Not familiar with single dash (and aliased `-` to `less` in my env, so want to prevent collisions) – user2141046 Nov 20 '22 at 14:25
  • 1
    In every shell script `-` in the context of input represents `stdin`. Don't alias it to `less` (I didn't know you COULD alias symbols!) or you'll run into problems. – Ed Morton Nov 20 '22 at 17:06
0

First let grep do the searching, then inject the lines with awk.

$ cat insert
new content
new content

$ line=$(cat insert)

$ awk -v var="${line}" '
      NR==1{last=$1; next} 
      FNR==last{print var}1' <(grep -n "^\.Ends$" file | cut -f 1 -d : | tail -1) file
rfa5 
.Ends
she
.Ends
ges
.Ends  
ges
new content
new content
.Ends
ges
ges

Data

$ cat file
rfa5 
.Ends
she
.Ends
ges
.Ends  
ges
.Ends
ges
ges
Andre Wildberg
  • Your answer relies on certain shell shenanigans that my shell (csh) doesn't support - such as round braces and having the spaces saved when performing `set line=\`cat insert\``, so I can't check it. – user2141046 Nov 20 '22 at 09:17
  • 1
    @user2141046 please read some/all of the articles that https://www.google.com/search?q=csh+why+not will find. – Ed Morton Nov 20 '22 at 10:59
  • @EdMorton it's nothing I can control - that's what I'm given and what my tools require. I read these articles when I tried aliasing something with commas and ended up with 5 chars per each comma sign... – user2141046 Nov 20 '22 at 11:26
  • 2
    @user2141046 if your boss is forcing you to write scripts in csh, you should push back as it's hurting your productivity and ability to write concise, robust, efficient, portable solutions and I'd hope your boss would appreciate that feedback. I'm not aware of any tools that must call or be called from csh rather than any other shell but if they exist they are poorly thought out and should be replaced with other portable tools (or if shell scripts you should add a csh shebang at the top). – Ed Morton Nov 20 '22 at 11:29