I find this solution problematic when many columns have to be added iteratively to a large CSV file.
An alternative is to store a transposed dataframe in the CSV file, i.e. headers act as indices and vice versa. Each new column then becomes a new row, which can be appended cheaply with mode='a' instead of reading and rewriting the whole file.
The upside is that you don't waste computation on these redundant read/rewrite operations.
Here are the operation times for the regular append mode (mode='a') versus the append-column approach, for series of length 5000 appended 100 times:

The downside is that you have to transpose the dataframe when reading the CSV back, to recover the "intended" orientation for other purposes.
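A minimal sketch of the round trip (the filename `demo.txt` is just an example): each column is written as one row, and reading back with `.T` restores the intended orientation.

```python
import numpy as np
import pandas as pd

# Write two "columns" as rows (transposed storage); the second write
# only appends one line instead of rewriting the file.
pd.DataFrame({'col_0': np.random.rand(5)}).T.to_csv(
    'demo.txt', mode='w', header=False, sep='\t')
pd.DataFrame({'col_1': np.random.rand(5)}).T.to_csv(
    'demo.txt', mode='a', header=False, sep='\t')

# Read back and transpose to recover the intended dataframe.
df = pd.read_csv('demo.txt', sep='\t', header=None, index_col=0).T
print(df.columns.tolist())  # -> ['col_0', 'col_1']
```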
Code for plot:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt

col = []
row = []
N = 100

# Append-row approach: store the dataframe transposed, so each new
# column is written as a single row with mode='a'.
for i in range(N):
    t1 = dt.datetime.now()
    data = pd.DataFrame({f'col_{i}': np.random.rand(5000)}).T
    data.to_csv('test_csv_data1.txt', mode='a', header=False, sep='\t')
    t2 = dt.datetime.now()
    row.append((t2 - t1).total_seconds())

# Append-column approach: read the whole file, add the column,
# and rewrite the whole file on every iteration.
# Initialise the file with the shared index so the first read succeeds.
pd.DataFrame(index=range(5000)).to_csv('test_csv_data2.txt', header=True, sep='\t')
for i in range(N):
    t1 = dt.datetime.now()
    # index_col=0 keeps the written index from piling up as extra columns
    data = pd.read_csv('test_csv_data2.txt', sep='\t', header=0, index_col=0)
    data[f'col_{i}'] = np.random.rand(5000)
    data.to_csv('test_csv_data2.txt', header=True, sep='\t')
    t2 = dt.datetime.now()
    col.append((t2 - t1).total_seconds())

t = pd.DataFrame({'N appends': range(N), 'append row': row, 'append col': col})
t = t.set_index('N appends')
t.plot()
plt.show()