I've written a simple utility which removes certain lines from a file. It reads the file into a list of lines (Python) and writes them back as a single concatenated string, newlines preserved; some lines are dropped, some get commented out in the process, and the percentage of change is negligible. Yet somehow diff presents me with one big red block (before) and one big green block (after). To the naked eye the resulting file looks fine. I thought about some subtle difference such as trailing spaces, but is that really possible? Had I added something invisible to each line, every red line would have been followed by its corresponding green one. Or so I gather.
UPD:
Well, line endings are the prime suspect, I was told. The essentials of my code:
def check_file(path):
    out_line = ""
    with open(path, "r") as f_r:
        for line in f_r.readlines():
            drop_it, o_line = consume_line(line)
            if not drop_it:
                out_line += o_line
    with open(path, "w") as f_w:
        f_w.write(out_line)
consume_line() essentially returns its argument as is. A line may be either scheduled for dropping, or uncommented/commented out, C++ style, in certain infrequent cases. There is no manual fiddling with line endings in any case.
No editor reports any change in the total number of lines when no line is dropped. The files originated and are handled on Linux.