I have a file with some duplicate lines. I want to find the duplicates and write the result to another file, with the duplicates grouped together and the earlier lines rearranged. The lines come in groups of two, so if line 2 and line 10 are duplicates, I get rid of line 10. The file looks something like this:
line 1 = string 1
line 2 = (possibly_common_string) or string 2
...
line 9 = string 9
line 10 = (possibly_common_string) or string 10
If there are no duplicates, I want to write the file out as it is. If there are duplicates, I want to write it to another file like this:
line 1 = string 1
line 2 = common string (this was the string in line 2; old line 10 is deleted)
line 3 = string 9 (old line 9 moved up)
line 4 = old line 5
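
To make the intent concrete, here is a rough sketch of the regrouping I am describing. This is only an outline, not tested code: the function name regroup is my own placeholder, I am assuming the file has an even number of lines and that the duplicate is always the second line of a pair, and I am not sure yet how the remaining lines should be renumbered, so in this sketch the remaining pairs simply keep their original order.

    def regroup(in_path, out_path):
        with open(in_path) as f:
            lines = [line.rstrip("\n") for line in f]

        # Split into the pairs the file is organized in: (1, 2), (3, 4), ...
        pairs = [lines[i:i + 2] for i in range(0, len(lines), 2)]

        seen = {}     # second line of a pair -> index of the group that has it
        groups = []   # output groups, one per kept pair

        for first, second in pairs:
            if second in seen:
                # Duplicate second line: drop it and move its partner up,
                # right after the pair that already has the common string.
                groups[seen[second]].append(first)
            else:
                seen[second] = len(groups)
                groups.append([first, second])

        with open(out_path, "w") as f:
            for group in groups:
                f.write("\n".join(group) + "\n")

On the example above this would write string 1, then the common string, then string 9 (old line 9 moved up, old line 10 dropped), followed by the remaining pairs.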
I am thinking of reading the whole file into a list and looking for duplicates, but simply de-duplicating the lines (e.g. with a set) loses the duplicates without giving me the index I need to move lines from.
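
For example, this is roughly the index tracking I have in mind; find_duplicates and the variable names are just placeholders I made up, and it assumes the whole file fits in memory as a list of lines.

    def find_duplicates(lines):
        """Return (duplicate_index, first_index) pairs for repeated lines."""
        first_seen = {}   # line content -> index of its first occurrence
        duplicates = []
        for i, line in enumerate(lines):
            if line in first_seen:
                duplicates.append((i, first_seen[line]))
            else:
                first_seen[line] = i
        return duplicates

For ["string 1", "common", "string 9", "common"] this returns [(3, 1)], i.e. the fourth line duplicates the second (0-based indices 3 and 1), which gives me both positions to work with.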
How might I remove duplicate lines from a file?
Can I grab the index of the duplicate line?