
I have a large (20 GB) CSV file that I want to read into a DataFrame. I am using the following code:

import pyodbc
import sqlalchemy
import pandas as pd

# read the CSV in chunks of 1000 rows so the whole file never sits in memory
for chunks in pd.read_csv("test.csv", chunksize=1000):
    print(list(chunks))

I am able to execute this without any memory issues, but I want to combine the chunks into a single unit so that I can transfer the file to Postgres. I have already created the engine for Postgres using SQLAlchemy.

I am new to Python. Please suggest how to loop over the chunks and write them to the database. I have read that generators are also a good option. Thanks in advance.
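
Here is a minimal sketch of what I think I am aiming for, appending each chunk to the same table so they end up united in Postgres. The connection URL, credentials, and the table name my_table are placeholders, and I am assuming a Postgres driver such as psycopg2 is installed:

import pandas as pd
from sqlalchemy import create_engine

# placeholder connection URL -- substitute real credentials, host, and database name
engine = create_engine("postgresql://user:password@localhost:5432/mydb")

# read_csv with chunksize returns an iterator of DataFrames, so only one
# chunk (1000 rows here) is held in memory at a time
for chunk in pd.read_csv("test.csv", chunksize=1000):
    # append each chunk to the same Postgres table
    chunk.to_sql("my_table", engine, if_exists="append", index=False)

Is appending chunk by chunk like this a reasonable approach, or would a generator-based solution be better?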

