I found a way to solve this issue.
Yes, converting the date to char with the appropriate formatting would work.
But in my case, the SQL statements are provided by another module, and I need to process more than a hundred tables.
So I decided to work with the data contained in the ResultProxy object that SQLAlchemy's execute() method returns.
I fetch a table in chunks of 1000 rows at a time (each chunk is a plain Python list).
But those rows are tuple-like (more precisely, SQLAlchemy RowProxy objects) and cannot be modified.
So I added a step that casts them to ordered dictionaries and updates the chunk list.
It's important to use collections.OrderedDict because it preserves field order.
With a plain dict, field labels and values might not line up.
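A quick illustration (the column names and values here are made up):

    from collections import OrderedDict

    # made-up (label, value) pairs, in the same order as the SELECT columns
    row_items = [("id", 1), ("created", "14/02/2015"), ("label", "foo")]
    odic = OrderedDict(row_items)
    print(list(odic.keys()))    # ['id', 'created', 'label']
    print(list(odic.values())) # [1, '14/02/2015', 'foo'] -- still aligned with the labels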
Now the chunk is ready for all kinds of treatments (converting dates to strings with the appropriate formatting, substituting characters in VARCHAR strings, and so on). The dictionary structure is perfect for this.
Note that before writing, the OrderedDict rows in the chunk list have to be cast back to plain value sequences.
Here is a simplified example:
import datetime
from collections import OrderedDict

result_proxy = connection.execute(request)
while True:
    chunk = result_proxy.fetchmany(1000)
    if not chunk:
        break
    # treatments come here:
    # 1- cast each row to an OrderedDict so it can be modified
    for i, row in enumerate(chunk):
        chunk[i] = OrderedDict(row)
    # 2- clean dates
    for i, row_odic in enumerate(chunk):
        for item in row_odic:
            if type(row_odic[item]) is datetime.datetime:
                row_odic[item] = row_odic[item].strftime("%d/%m/%Y")
    # other data treatments come here
    # cast back so each row looks like a classical result again:
    for i, row_odic in enumerate(chunk):
        chunk[i] = list(row_odic.values())
    # finally write the chunk rows to the csv file
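To make the "other data treatments" and the final write concrete, here is a sketch (both parts belong inside the fetch loop above; the semicolon substitution and the file name are placeholders, not my real rules):

    import csv

    # another treatment, run on the OrderedDict rows before the cast back:
    # strip the csv delimiter out of VARCHAR values (placeholder rule)
    for row_odic in chunk:
        for item in row_odic:
            if isinstance(row_odic[item], str):
                row_odic[item] = row_odic[item].replace(";", " ")

    # after the cast back, append the chunk to the csv file
    # (in the real code the file is opened once, before the fetch loop)
    with open("output.csv", "a", newline="") as csv_file:  # placeholder file name
        writer = csv.writer(csv_file, delimiter=";")
        writer.writerows(chunk)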
I am not sure it's the most efficient solution, but performance looks good.
I have a version of this treatment (same volume of data) using the Pandas library, and it takes a bit longer to execute.
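For reference, the pandas variant was along these lines (a sketch, not my exact code; pd.read_sql with chunksize streams the result in the same way, and the file name is a placeholder):

    import pandas as pd

    for df in pd.read_sql(request, connection, chunksize=1000):
        # format every datetime column as dd/mm/yyyy strings
        for col in df.select_dtypes(include=["datetime64[ns]"]).columns:
            df[col] = df[col].dt.strftime("%d/%m/%Y")
        # append each chunk to the csv file (placeholder file name)
        df.to_csv("output.csv", mode="a", header=False, index=False)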