Thousands of dfs with consistent columns are being generated in a for loop reading different files, and I'm trying to merge / concat / append them into a single df, combined:
import pandas as pd

combined = pd.DataFrame()
for i in range(1, 1000):  # demo only
    df = generate_df()  # df is created here, one per file
    combined = pd.concat([combined, df])
This is initially fast but slows as combined grows, eventually becoming unusably slow, since each concat copies all of the previously accumulated data into a new frame. This answer on how to append rows explains that adding rows to a dict and then creating a df from it at the end is most efficient, but I can't figure out how to do that with to_dict.
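Here's my best guess at what that answer is suggesting, assuming generate_df returns one df per file (in my real code it reads a file), though I'm not sure this is the intended use of to_dict:

import pandas as pd

rows = []
for i in range(1, 1000):  # demo only
    df = generate_df()  # assumption: returns one df per file
    # flatten each df into a list of plain row dicts
    rows.extend(df.to_dict("records"))

# build the combined df once, at the end
combined = pd.DataFrame(rows)

The alternative I've seen is to skip the dicts entirely, collect the dfs in a list, and call pd.concat once after the loop, but I don't know which is preferable here.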
What's a good way to do this? Am I approaching this the wrong way?