
I am trying to create a DataFrame from the data in a Redshift table, but I am getting a "MemoryError" because the data I am fetching is huge in volume. How do I solve this issue? (I found that chunking is one option; how do I implement chunking?) Is there any other library useful for such situations? The following is an example of my code:

import pandas as pd
import psycopg2

# Note: 'pass' is a reserved word in Python and cannot be a variable name,
# and psycopg2 expects the keyword 'dbname' rather than 'db_name'.
conn = psycopg2.connect(host=host_name, user=usr, port=pt, password=pwd, dbname=DB)

sql_query = "SELECT * FROM Table_Name"
# read_sql_query takes the query first, then the connection
df = pd.read_sql_query(sql_query, conn)
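
For reference, below is a minimal sketch of the chunked approach I am asking about, using the chunksize parameter of pandas' read_sql_query, which returns an iterator of DataFrames instead of loading everything at once. The connection variables (host_name, usr, pt, pwd, DB) are placeholders for my real settings, and the chunk size of 100000 rows is an arbitrary example value:

import pandas as pd
import psycopg2

conn = psycopg2.connect(host=host_name, user=usr, port=pt, password=pwd, dbname=DB)

sql_query = "SELECT * FROM Table_Name"

# With chunksize set, read_sql_query yields DataFrames of at most
# 100000 rows each rather than one huge DataFrame.
processed_chunks = []
for chunk in pd.read_sql_query(sql_query, conn, chunksize=100000):
    # Process each chunk here (filter rows, aggregate, write to disk, ...)
    # so that only the reduced result is kept in memory.
    processed_chunks.append(chunk)

df = pd.concat(processed_chunks, ignore_index=True)

Note that concatenating all chunks at the end only helps if the full result actually fits in memory after processing; otherwise each chunk should be reduced or written out inside the loop.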
