
I am currently running a script on a Linux system. The script reads a CSV of around 6000 lines into a dataframe. The job of the script is to turn a dataframe such as:

name       children
Bob        [Jeremy, Nancy, Laura]
Jennifer   [Kevin, Aaron]

to:

name       children                 childName
Bob        [Jeremy, Nancy, Laura]   Jeremy
Bob        [Jeremy, Nancy, Laura]   Nancy
Bob        [Jeremy, Nancy, Laura]   Laura
Jennifer   [Kevin, Aaron]           Kevin
Jennifer   [Kevin, Aaron]           Aaron

And write it to ANOTHER FILE (the original csv is to remain the same).

Basically, add a new column and make a row for each item in a list. Note that I am dealing with a dataframe with 7 columns, but for demonstration purposes I am using a smaller example. The columns in my actual csv are all strings except for two that are lists.
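Those two list columns come out of read_csv as strings, which is why my code below parses them with ast.literal_eval. For illustration (a made-up cell value, not from the real file):

import ast

cell = "['Jeremy', 'Nancy', 'Laura']"  # how a list column looks after read_csv
children = ast.literal_eval(cell)      # -> a real Python list: ['Jeremy', 'Nancy', 'Laura']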

This is my code:


import ast
import os
import pandas as pd

cwd = os.path.abspath(__file__ + "/..")
data = pd.read_csv(cwd + "/folded_data.csv", sep='\t', encoding="latin1")
output_path = cwd + "/unfolded_data.csv"

out_header = ["name", "children", "childName"]
count = len(data)
for idx, e in data.iterrows():
    print("Row ", idx, " out of ", count)
    entry = e.values.tolist()
    c_lst = ast.literal_eval(entry[1])   # the children column is stored as a string

    for c in c_lst:
        n_entry = entry + [c]
        # re-read (or create) the output file, append the new row, and write it all back out
        if os.path.exists(output_path):
            output = pd.read_csv(output_path, sep='\t', encoding="latin1")
        else:
            output = pd.DataFrame(columns=out_header)

        output.loc[len(output)] = n_entry
        output.to_csv(output_path, sep='\t', index=False)

But I am getting the following error:

Traceback (most recent call last):
  File "fileUnfold.py", line 31, in <module>
    output.to_csv(output_path, sep='\t', index=False)
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/generic.py", line 3020, in to_csv
    formatter.save()
  File "/usr/local/lib/python3.5/dist-packages/pandas/io/formats/csvs.py", line 172, in save
    self._save()
  File "/usr/local/lib/python3.5/dist-packages/pandas/io/formats/csvs.py", line 288, in _save
    self._save_chunk(start_i, end_i)
  File "/usr/local/lib/python3.5/dist-packages/pandas/io/formats/csvs.py", line 315, in _save_chunk
    self.cols, self.writer)
  File "pandas/_libs/writers.pyx", line 75, in pandas._libs.writers.write_csv_rows
MemoryError

Is there another way to do what I want to do without getting this error?

EDIT: csv file if you want to have a look https://media.githubusercontent.com/media/lucas0/Annotator/master/annotator/data/folded_snopes.csv

EDIT2: I am currently using

with open(output_path, 'w+') as f:
    output.to_csv(f, index=False, header=True, sep='\t')

And around the 98th row the program starts slowing down considerably. I am pretty sure this is because I am reading the file over and over again as it gets larger. How can I just append a row to the file without reading it?
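What I have in mind is something like the sketch below: write only the new row instead of rewriting the whole output for every source (this assumes to_csv accepts mode='a' and that the header is handled separately, once):

# rough sketch of the idea: append a single new row without re-reading the output
new_row = pd.DataFrame([n_entry], columns=out_header)
new_row.to_csv(output_path, sep='\t', index=False, header=False, mode='a')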

EDIT3: Here is the actual code that I am using to deal with the data linked in my first edit. This might make it easier to answer.

import ast
import os
import pandas as pd

cwd = os.path.abspath(__file__ + "/..")
snopes = pd.read_csv(cwd + "/folded_snopes.csv", sep='\t', encoding="latin1")
output_path = cwd + "/samples.csv"

out_header = ["page", "claim", "verdict", "tags", "date", "author", "source_list", "source_url"]
count = len(snopes)
for idx, e in snopes.iterrows():
    print("Row ", idx, " out of ", count)
    entry = e.values.tolist()
    src_lst = ast.literal_eval(entry[6])   # source_list is stored as a string

    for src in src_lst:
        n_entry = entry + [src]
        # re-read (or create) the whole output file for every single new row
        if os.path.exists(output_path):
            output = pd.read_csv(output_path, sep='\t', encoding="latin1")
        else:
            output = pd.DataFrame(columns=out_header)

        output.loc[len(output)] = n_entry
        with open(output_path, 'w+') as f:
            output.to_csv(f, index=False, header=True, sep='\t')
Mohamad Moustafa
    7 columns and 6000 rows should not cause memory issues on any reasonable PC these days. You haven't given an easily testable example but I wouldn't be surprised if it's something like `n_entry = entry + [c]` ending up with lists within lists that blows up the memory – roganjosh Jun 29 '19 at 20:23
  • @roganjosh I added a link to the data if you want to have a look – Mohamad Moustafa Jun 29 '19 at 20:28
  • Is there any real reason to use pandas here? You can probably accomplish this extremely efficiently using the standard library `csv` module. – juanpa.arrivillaga Jun 29 '19 at 20:33
  • @juanpa.arrivillaga some of the data itself has tabs. I am an intern and my supervisor (who is the one who gave me this data) told me to just use pandas as it can deal with those problems. I was using normal open and write, but that didn't work properly because data around the 1000th line isn't properly formatted. – Mohamad Moustafa Jun 29 '19 at 20:35
  • @MohamadMoustafa Could you, as a temporary workaround, delete your last row with df = df[:-1] and check against my answer? – Mahsa Hassankashi Jun 29 '19 at 20:38
  • Um, the `csv` module handles all of these issues (see the sketch after these comments). I didn't say to use `file.write`; you should use the `csv` module. Using pandas here is like using a sledgehammer to swat a fly – juanpa.arrivillaga Jun 29 '19 at 20:39
  • As a side comment: the code will benefit greatly if you split reading, modifying, and writing into 3 distinct parts or functions. That way you can even debug which part fails on the dataframe – Evgeny Jun 29 '19 at 20:39
  • The error is in the to_csv line. So the error is in the writing. – Mohamad Moustafa Jun 29 '19 at 20:41
  • Put the `with open` outside the outer for loop, because you have already filled your dataframe and you just need to write it to csv. – Mahsa Hassankashi Jun 29 '19 at 20:46
  • I still need to add data to the dataframe after reading it from the "data" path – Mohamad Moustafa Jun 29 '19 at 20:49
  • I meant: is it possible, in your program, to read everything and process it first, then write? Interleaving reading and writing may be what causes this problem and the slowdown. – Mahsa Hassankashi Jun 29 '19 at 20:51
  • How about using `chunksize`? https://stackoverflow.com/questions/38531195/writing-panda-dataframes-to-csv-file-in-chunks – Mark Moretto Jun 29 '19 at 20:52
  • I want to rewrite it so that it appends. It seems that keeping such a large csv in program memory isn't a good idea. I plan on using this script for much larger files (more than 10k rows). – Mohamad Moustafa Jun 29 '19 at 20:53
  • @MohamadMoustafa I updated the answer; you can open one file, process it, and write into another by appending a row to the csv, as you asked in your updated question. – Mahsa Hassankashi Jun 29 '19 at 21:00
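For reference, a rough sketch of the csv-module approach suggested in the comments (untested against the real data; it assumes the source list is in the seventh column, as in the code above, and that the input header does not already have a source_url column):

import ast
import csv

with open("folded_snopes.csv", newline='', encoding="latin1") as src, \
     open("samples.csv", "w", newline='', encoding="latin1") as dst:
    reader = csv.reader(src, delimiter='\t')
    writer = csv.writer(dst, delimiter='\t')
    header = next(reader)
    writer.writerow(header + ["source_url"])
    for row in reader:
        # column 6 holds the string representation of the source list
        for src_url in ast.literal_eval(row[6]):
            writer.writerow(row + [src_url])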

2 Answers


Try using `with open` to control how the file is brought into memory; maybe that resolves it.

How can I just append a row to the file without reading it?

import csv
from pathlib import Path

path1 = Path("/yourfolder/output_path")   # file to write the new rows to
path2 = Path("/yourfolder/input_path")    # original csv to read
with open(path1, 'w', newline='') as f1, open(path2, 'r') as f2:
    writer = csv.writer(f1)
    #output.to_csv(f, header=False, sep=';')
    reader = csv.reader(f2)
    i = 0
    for row in reader:
        row.insert(1, output[i])   # output holds the values to add, one per input row
        writer.writerow(row)
        i += 1
Mahsa Hassankashi

I stopped reading the output file, and I stopped writing for every source. Instead, I build a dataframe with the new rows for each row of the input data and then append that to samples.csv.

Code:

import ast
import os
import pandas as pd

cwd = os.path.abspath(__file__ + "/..")
snopes = pd.read_csv(cwd + "/folded_snopes.csv", sep='\t', encoding="latin1")
output_path = cwd + "/samples.csv"

out_header = ["page", "claim", "verdict", "tags", "date", "author", "source_list", "source_url"]
count = len(snopes)
is_first = True

for idx, e in snopes.iterrows():
    print("Row ", idx, " out of ", count)
    entry = e.values.tolist()
    src_lst = ast.literal_eval(entry[6])
    # build a small dataframe holding only the expanded rows for this input row
    output = pd.DataFrame(columns=out_header)
    for src in src_lst:
        n_entry = entry + [src]
        output.loc[len(output)] = n_entry

    # append to the output file; write the header only on the first chunk
    output.to_csv(output_path, sep='\t', header=is_first, index=False, mode='a')
    is_first = False
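As a side note, newer pandas versions (0.25+, I believe) have DataFrame.explode, which could replace the inner loop entirely. A rough sketch, assuming the input column is really named source_list:

# alternative sketch, assuming pandas >= 0.25 and a column literally named "source_list"
snopes["source_url"] = snopes["source_list"].apply(ast.literal_eval)
unfolded = snopes.explode("source_url")
unfolded.to_csv(output_path, sep='\t', index=False)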
Mohamad Moustafa