
I can't read a whole 5 GB CSV file in one go, but Pandas' read_csv() with chunksize set seems like a fast and easy way to process it:

import pandas as pd

def run_pand(csv_db):
    reader = pd.read_csv(csv_db, chunksize=5000)
    for chunk in reader:
        dup = chunk.duplicated(subset=["Region", "Country", "Ship Date"])
        # afterwards I will write the duplicates to a new csv

As I understand it, reading in chunks won't let me find a duplicate if its occurrences land in different chunks. Or will it?

Is there a way to search for matches using a Pandas method?
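
For reference, this is the kind of two-pass workaround I was considering. It is only a sketch: it assumes the file has a header with exactly those three column names, that the key columns on their own fit in memory, and the function name, out_csv and the chunk size are just placeholders:

import pandas as pd

KEY_COLS = ["Region", "Country", "Ship Date"]
CHUNKSIZE = 5000

def write_duplicate_rows(csv_db, out_csv):
    # First pass: load only the key columns (much smaller than the full file)
    # and flag every row whose key combination occurs more than once anywhere.
    keys = pd.read_csv(csv_db, usecols=KEY_COLS)
    dup_mask = keys.duplicated(keep=False).to_numpy()

    # Second pass: stream the full rows in chunks and append the flagged ones.
    header = True
    for i, chunk in enumerate(pd.read_csv(csv_db, chunksize=CHUNKSIZE)):
        mask = dup_mask[i * CHUNKSIZE : i * CHUNKSIZE + len(chunk)]
        chunk[mask].to_csv(out_csv, mode="a", header=header, index=False)
        header = False

I don't know whether this is the idiomatic Pandas way, which is what I'm asking about.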

  • Does this answer your question? [Removing duplicates on very large datasets](https://stackoverflow.com/questions/52407474/removing-duplicates-on-very-large-datasets) – Timus Apr 15 '22 at 18:24
  • Nope. In those answers, duplicates were found by hashing each whole row and comparing the hashes, but I need to check only columns 5, 6, 10, 11 and 12. – allvolload Apr 19 '22 at 16:02
  • 1
    But can't you adapt that to your situation: Build a string out of the relevant columns and work with that? – Timus Apr 19 '22 at 17:36
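
A minimal sketch of that suggestion, keeping a running set of hashed key strings across chunks. The function name, output file and chunk size are placeholders, it assumes the same three key columns as above, and it requires that the set of distinct keys fits in memory (the hashing only keeps the set compact):

import hashlib

import pandas as pd

KEY_COLS = ["Region", "Country", "Ship Date"]

def write_duplicates(csv_db, out_csv, chunksize=5000):
    seen = set()   # hashes of key strings seen in earlier chunks
    header = True
    for chunk in pd.read_csv(csv_db, chunksize=chunksize):
        # Build one key string per row out of the relevant columns, then hash it.
        key_strings = chunk[KEY_COLS].astype(str).agg("|".join, axis=1)
        hashes = key_strings.map(lambda s: hashlib.md5(s.encode()).hexdigest())

        # A row counts as a duplicate if its key already appeared earlier in
        # this chunk or in any previous chunk (the first occurrence is kept).
        is_dup = hashes.duplicated() | hashes.isin(seen)
        chunk[is_dup].to_csv(out_csv, mode="a", header=header, index=False)
        header = False

        seen.update(hashes)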

0 Answers