I have a text file in which a particular set of consecutive lines appears again and again. I need to trim all the duplicate occurrences and print only the first occurrence.
Input:
$ cat log_repeat.txt
total bytes = 0, at time = 1190554
time window = 0, at time = 1190554
BW in Mbps = 0, at time = 1190554
total bytes = 0, at time = 1190554
time window = 0, at time = 1190554
BW in Mbps = 0, at time = 1190554
total bytes = 0, at time = 1190554
time window = 0, at time = 1190554
BW in Mbps = 0, at time = 1190554
total bytes = 0, at time = 1190554
time window = 0, at time = 1190554
BW in Mbps = 0, at time = 1190554
total bytes = 0, at time = 1190554
time window = 0, at time = 1190554
BW in Mbps = 0, at time = 1190554
$
The Perl solution below works only when the block occurs an odd number of times,
$ perl -0777 -pe 's/(^total.*)\1//gms' log_repeat.txt
total bytes = 0, at time = 1190554
time window = 0, at time = 1190554
BW in Mbps = 0, at time = 1190554
$
and prints nothing when the block occurs an even number of times, presumably because the greedy capture splits the repeated text into two identical halves and the substitution deletes both, so only an odd count leaves one block behind. How do I keep the first occurrence irrespective of whether the section repeats an odd or even number of times?
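Something along these lines might be a starting point: make the capture lazy, anchor it on the block's last line, and put the captured copy back with $1. This is only a sketch, and it assumes every repeated block is byte-identical and always runs from a "total bytes" line down to a "BW in Mbps" line:

$ perl -0777 -pe 's/(^total bytes.*?^BW in Mbps[^\n]*\n)\1+/$1/gms' log_repeat.txt

On the sample above this should leave just the first three-line block regardless of whether the repetition count is odd or even, but I am not sure how robust it is, hence the question.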