If you happen (like me) to be doing some hacky server maintenance by running a single insert query for 35k rows of data, upping the open_files_limit
is not the answer. Try breaking up your statement into multiple bite-sized pieces instead.
Here's some Python code illustrating how I solved my problem:
headers = ['field_1', 'field_2', 'field_3']
data = [('some stuff', 12, 89.3), ...]  # 35k rows' worth
insert_template = "\ninsert into my_schema.my_table ({}) values {};"
value_template = "('{}', {}, {})"
start = 0
chunk_size = 1000
total = len(data)
sql = ''
while start < total:
    end = start + chunk_size
    # render the value tuples for this chunk only
    values = ", \n".join([
        value_template.format(*row)
        for row in data[start:end]
    ])
    # each chunk becomes its own 1000-row insert statement
    sql += insert_template.format(", ".join(headers), values)
    start = end
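Each chunk ends up as a complete statement terminated by a semicolon, so the finished sql string can be run as a multi-statement batch if your client allows that, or split on the semicolons and executed one piece at a time.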
Note that I do not recommend running statements like this as a rule; mine was a quick-and-dirty job that neglects proper input sanitization and connection management.
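If you want the same chunking with less risk, most DB-API drivers will do the escaping for you via executemany. Here's a rough sketch of that approach, assuming the pymysql driver and made-up connection details (neither of which were part of my original hack):
import pymysql  # assumed driver; any DB-API 2.0 connector works similarly

# hypothetical connection details, not from the original job
conn = pymysql.connect(host='localhost', user='me', password='secret')
insert_sql = ("insert into my_schema.my_table (field_1, field_2, field_3) "
              "values (%s, %s, %s)")

chunk_size = 1000
try:
    with conn.cursor() as cursor:
        # feed the driver one bite-sized chunk at a time;
        # it quotes and escapes each value itself
        for start in range(0, len(data), chunk_size):
            cursor.executemany(insert_sql, data[start:start + chunk_size])
    conn.commit()
finally:
    conn.close()
The chunking idea is the same; the difference is that the values travel as parameters instead of being glued into the SQL text.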