I have a CSV file: 22 GB, 46,000,000 lines. To save memory, the file is read and processed in chunks:
import pandas as pd

tp = pd.read_csv(f_in, sep=',', chunksize=1000, encoding='utf-8', quotechar='"')
for chunk in tp:
    # ... process chunk ...
But the file is malformed and raises an exception:
Error tokenizing data. C error: Expected 87 fields in line 15092657, saw 162
Is there a way to discard that chunk and continue the loop with the next chunk?
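Something like the rough sketch below is what I have in mind (just an attempt, assuming pandas.errors.ParserError and TextFileReader.get_chunk(); I don't know whether the reader's internal state is still usable after the C parser raises):

import pandas as pd
from pandas.errors import ParserError

reader = pd.read_csv(f_in, sep=',', chunksize=1000, encoding='utf-8',
                     quotechar='"', iterator=True)
while True:
    try:
        chunk = reader.get_chunk()  # read the next 1000 rows
    except ParserError:
        continue  # discard the malformed chunk and try the next one
                  # NOTE: unsure the reader recovers after a C-parser error
    except StopIteration:
        break     # end of file reached
    # ... process chunk ...

(I'm aware of error_bad_lines=False, which drops malformed lines at parse time, but I'd like to know whether a whole chunk can be discarded instead.)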