I'm writing entries into a DynamoDB table:
    import time
    ...
    for item in my_big_map.items():
        Ddb_model(column1=item[0], column2=item[1], column_timestamp=time.time()).save()
I suspect this is slow, so I was thinking about using a multi-threading strategy such as concurrent.futures to write each entry to the table:
    import concurrent.futures

    def write_one_entry(item):
        Ddb_model(column1=item[0], column2=item[1], column_timestamp=time.time()).save()

    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(write_one_entry, my_big_map.items())
However, I've found batch writes in PynamoDB's documentation. It looks like a handy way to accelerate write operations.
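Based on my reading of the docs, the batch-write pattern would look something like this for my case (a sketch, reusing the same Ddb_model and my_big_map as above):

    import time

    # batch_write() is PynamoDB's context manager for batch writes; it buffers
    # saved items and flushes them to DynamoDB in chunks via BatchWriteItem.
    with Ddb_model.batch_write() as batch:
        for key, value in my_big_map.items():
            batch.save(Ddb_model(column1=key, column2=value, column_timestamp=time.time()))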
Does it also use a multi-threading strategy?
Is the PynamoDB implementation better than using concurrent.futures to do bulk writes?