My sample of data is really big (1.2 million documents), and I need to create and analyse data on only one "pandas dataframe". For now my code looks like this:
conn = psycopg2.connect("dbname=monty user=postgres host=localhost password=postgres")
cur = conn.cursor('aggre')
cur.execute("SELECT * FROM binance.zrxeth_ob_indicators;")
row = cur.fetchall()
df = pd.DataFrame(row,columns = ['timestamp', 'topAsk', 'topBid', 'CPA', 'midprice', 'CPB', 'spread', 'CPA%', 'CPB%'])
But it will take ages to localy upload everything in the variable df? What I tried so far was to do this:
for row in cur:
dfsub = pd.DataFrame(row,columns=['timestamp', 'topAsk', 'topBid', 'CPA', 'midprice', 'CPB', 'spread', 'CPA%', 'CPB%'])
df = df.concat([df,dfsub])
but it gives me the following error: DataFrame constructor not properly called!
any idea? Thanks!