
I am trying to create a simple program that removes duplicate lines from a file. However, I am stuck. My goal is to remove all but one copy of each duplicated line, so that I still have that data; this is different from the suggested duplicate question. I would also like it to read from and write to the same filename. When I made the input and output filenames the same, it just produced an empty file.

input_file = "input.txt"
output_file = "input.txt"

seen_lines = set()
outfile = open(output_file, "w")

for line in open(input_file, "r"):
    if line not in seen_lines:
        outfile.write(line)
        seen_lines.add(line)

outfile.close()

input.txt

I really love christmas
Keep the change ya filthy animal
Pizza is my fav food
Keep the change ya filthy animal
Did someone say peanut butter?
Did someone say peanut butter?
Keep the change ya filthy animal

Expected output

I really love christmas
Keep the change ya filthy animal
Pizza is my fav food
Did someone say peanut butter?
    You open the file twice, since `input_file` and `output_file` are the same. The second time you open as read, which is where I think your problem is. So you won't be able to write. – busybear Dec 29 '18 at 23:23
  • @busybear Yes. Open your file as `r+` to read and write to the file at the same time (they will both work). – Eb946207 Dec 29 '18 at 23:24
  • Possible duplicate of [How might I remove duplicate lines from a file?](https://stackoverflow.com/questions/1215208/how-might-i-remove-duplicate-lines-from-a-file) – glennv Dec 29 '18 at 23:36

6 Answers


The line outfile = open(output_file, "w") truncates your file no matter what else you do. The reads that follow will find an empty file. My recommendation for doing this safely is to use a temporary file:

  1. Open a temp file for writing
  2. Process the input to the new output
  3. Close both files
  4. Move the temp file to the input file name

This is much more robust than opening the file twice for reading and writing: if anything goes wrong partway through, you still have the original file and whatever work you did so far stashed away, whereas your current approach can corrupt the file.

Here is a sample using tempfile.NamedTemporaryFile, and a with block to make sure everything is closed properly, even in case of error:

from tempfile import NamedTemporaryFile
from shutil import move

input_file = "input.txt"
output_file = "input.txt"

seen_lines = set()

with NamedTemporaryFile('w', delete=False) as output, open(input_file) as infile:
    for line in infile:  # iterate the already-open handle instead of opening the file a second time
        sline = line.rstrip('\n')
        if sline not in seen_lines:
            output.write(line)
            seen_lines.add(sline)
move(output.name, output_file)

The move at the end will work correctly even if the input and output names are the same, since output.name is guaranteed to be something different from both.
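Note that NamedTemporaryFile creates its file in the system's default temp directory, which may be on a different filesystem than the target; shutil.move still handles that case by falling back to a copy, but you can pass dir='.' (or the input file's directory) to keep the temp file on the same filesystem, so the final move is a simple rename.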

Note also that I'm stripping the newline from each line in the set, since the last line might not have one.
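As a quick illustration with one of the sample lines:

>>> "Did someone say peanut butter?\n" == "Did someone say peanut butter?"
False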

Alt Solution

If you don't care about the order of the lines, you can simplify the process somewhat by doing everything directly in memory:

input_file = "input.txt"
output_file = "input.txt"

with open(input_file) as input:
    unique = set(line.rstrip('\n') for line in input)
with open(output_file, 'w') as output:
    for line in unique:
        output.write(line)
        output.write('\n')

You can compare this against

with open(input_file) as input:
    unique = set(line.rstrip('\n') for line in input.readlines())
with open(output_file, 'w') as output:
    output.write('\n'.join(unique))

The second version does exactly the same thing, but loads and writes all at once.

Mad Physicist
  • just a question: this way of removing duplicates is very slow if there are over 100,000 lines. Is there a better way? Also, I am still getting the same error. –  Dec 30 '18 at 00:26
  • @Mark. With that size, your I/O is the bottleneck. I doubt you can do much to speed it up. – Mad Physicist Dec 30 '18 at 00:28
  • @Mark. I've proposed an alternative – Mad Physicist Dec 30 '18 at 00:41
  • The lines in the file are already in the order I want them to be; does your second version preserve that order when it removes duplicates? If there are 2 duplicates, the topmost duplicate is the one that shouldn't be deleted. –  Dec 30 '18 at 00:46
  • @Mark. The second version will not preserve the original order of the lines. – Mad Physicist Dec 30 '18 at 00:58

The problem is that you're trying to write to the same file that you're reading from. You have at least two options:

Option 1

Use different filenames (e.g. input.txt and output.txt). This is, at some level, the easiest approach.
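
A minimal sketch of this option, reusing the question's logic (the output.txt name is just for illustration):

seen_lines = set()
with open('input.txt') as infile, open('output.txt', 'w') as outfile:
    for line in infile:
        if line not in seen_lines:  # keep only the first occurrence
            seen_lines.add(line)
            outfile.write(line)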

Option 2

Read all data in from your input file, close that file, then open the file for writing.

with open('input.txt', 'r') as f:
    lines = f.readlines()

seen_lines = set()
with open('input.txt', 'w') as f:
    for line in lines:
        if line not in seen_lines:
            seen_lines.add(line)
            f.write(line)

Option 3

Open the file for both reading and writing using r+ mode. You need to be careful in this case to read all the data you're going to process before you start writing; if you interleave reads and writes in a single loop, your writes will move the file position out from under the loop iterator.
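
A minimal sketch of this option, assuming the whole file fits in memory (the seek before writing and the truncate afterwards are the parts that are easy to get wrong):

with open('input.txt', 'r+') as f:
    lines = f.readlines()  # read everything up front
    seen_lines = set()
    f.seek(0)              # rewind to overwrite in place
    for line in lines:
        if line not in seen_lines:
            seen_lines.add(line)
            f.write(line)
    f.truncate()           # drop the leftover tail of the old contents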

Jonah Bishop
seen_lines = []

with open('input.txt', 'r') as infile:
    for line in infile:
        line_stripped = line.strip()  # strip the newline so the last line (which may lack one) compares equal
        if line_stripped not in seen_lines:
            seen_lines.append(line_stripped)

with open('input.txt', 'w') as outfile:
    for line in seen_lines:
        outfile.write(line)
        if line != seen_lines[-1]:
            outfile.write('\n')  # plain '\n'; os.linesep in text mode would produce '\r\r\n' on Windows

Output:

I really love christmas
Keep the change ya filthy animal
Pizza is my fav food
Did someone say peanut butter?
Bitto
  • This fixes the problem and is a good solution for small input files, but note that it will be quite slow (quadratic time) for large files due to the linear search through `seen_lines`. – Flight Odyssey Dec 29 '18 at 23:32
  • When I use this code, I see `Keep the change ya filthy animal` twice in the output? –  Dec 29 '18 at 23:34
  • @Mark I tested the code and I don't see it. Can you copy the code as it is and try again? Maybe you made some unintentional mistake while typing it. – Bitto Dec 29 '18 at 23:36
  • Wait, I think it's because the last line has the `EOF` at the end of the line, so it sees it as not a duplicate. I tested it: if the last line is a duplicate line, it always keeps it because of the `EOF`. Any way around this? I am on Windows, by the way. –  Dec 29 '18 at 23:36
  • @Mark https://stackoverflow.com/questions/18857352/python-remove-very-last-character-in-file/18857381 might help. I can't say for sure. i am on Ubuntu. – Bitto Dec 29 '18 at 23:43
  • @FlightOdyssey O(n) (i.e. linear complexity) remains O(n) no matter how many times it's repeated. In order to have quadratic complexity you have to have a linear loop inside a linear loop. – Paul Evans Dec 29 '18 at 23:56
  • @bitto I think I understand what the issue is. The last line doesn't have a `\n`, thus it is considered a different string. –  Dec 29 '18 at 23:59
  • The last line does not have an EOF. Every line but the last has a newline at the end though, which you aren't stripping – Mad Physicist Dec 30 '18 at 00:31
  • @MadPhysicist Updated my answer. Can you guess why I did not have any problems on Linux? – Bitto Dec 30 '18 at 00:45
  • Because Windows has funny line endings – Mad Physicist Dec 30 '18 at 00:57
  • @PaulEvans That is exactly what we have here, a linear loop inside a linear loop. `for line in lines` is the outer loop, and `if line_stripped not in seen_lines:` is the inner loop. (Internally, Python must iterate over each line in `seen_lines` which is linear in the worst case, since it is stored as a list and not a set) – Flight Odyssey Dec 30 '18 at 08:07
  • @FlightOdyssey ah missed that, yes `seen_lines` should be a `set` then we'd have O(n log n). – Paul Evans Dec 30 '18 at 11:53
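
For reference, the same approach with an auxiliary set for membership tests (average O(1) per lookup instead of a linear scan through the list) might look like this:

seen = set()
unique_lines = []  # preserves first-seen order
with open('input.txt') as infile:
    for line in infile:
        stripped = line.rstrip('\n')
        if stripped not in seen:  # set lookup, not a list scan
            seen.add(stripped)
            unique_lines.append(stripped)
with open('input.txt', 'w') as outfile:
    outfile.write('\n'.join(unique_lines))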

I believe this is the easiest way to do what you want:

with open('FileName.txt', 'r+') as i:
    AllLines = i.readlines()
    i.seek(0)  # rewind before rewriting in place
    for line in dict.fromkeys(AllLines):  # keeps the first occurrence of each line (dicts preserve order in 3.7+)
        i.write(line)
    i.truncate()  # remove any leftover tail from the old contents
  • At that point it would be much simpler to reopen for writing. If you're removing lines, there will be a tail left in the file. – Mad Physicist Dec 30 '18 at 00:14

Try the code below, which uses a list comprehension with str.join, plus set and sorted (with key=l.index) to drop duplicates while keeping the original order:

input_file = "input.txt"
output_file = "input.txt"
seen_lines = []
outfile = open(output_file, "w")
infile = open(input_file, "r")
l = [i.rstrip() for i in infile.readlines()]
outfile.write('\n'.join(sorted(set(l,key=l.index))))
outfile.close()
U13-Forward

Just my two cents, in case you happen to be able to use Python 3. It uses:

  • A reusable Path object which has a handy write_text() method.
  • An OrderedDict as a data structure to satisfy the constraints of uniqueness and order at once.
  • A generator expression instead of Path.read_text() to save on memory.

# in-place removal of duplicate lines, while retaining order
from collections import OrderedDict
from pathlib import Path

filepath = Path("./duplicates.txt")

with filepath.open() as _file:
    no_duplicates = OrderedDict.fromkeys(line.rstrip('\n') for line in _file)

filepath.write_text("\n".join(no_duplicates))
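
Note that on Python 3.7 and later a plain dict also preserves insertion order, so dict.fromkeys(...) would behave identically to OrderedDict.fromkeys(...) here; OrderedDict just makes the intent explicit.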
timmwagener