I am using the Python function below to make a copy of a file that I am processing as part of data ingestion in Azure Data Factory pipelines. It works for small files, but it fails on huge files without returning any errors. When I call this function on a 2.2 GB file, execution stops after writing 107 KB of data, without throwing any exceptions. Can anyone point out what the issue could be here?
# Stream the source file line by line and write the transformed lines out.
with open(Temp_File_Name, encoding='ISO-8859-1') as a, \
        open(Load_File_Name, 'w', encoding='ISO-8859-1') as b:
    for line in a:
        if ' blah blah ' in line:
            var = line[34:42]                # field at columns 34-42
            data = line[0:120] + var + '\n'
            b.write(data)
The input and output locations used here are files in Azure Blob Storage. I am following this approach because I need to read each line and perform some operation on it.
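In case it helps, one workaround I have been considering is to pull the blob down to local disk in chunks with the azure-storage-blob SDK, process it locally with the function above, and upload the result, instead of opening the blob paths directly. This is only a rough, untested sketch; the connection string, container name, blob names, and local file names are all placeholders, not my real values:

from azure.storage.blob import BlobClient

CONN_STR = "<storage-account-connection-string>"  # placeholder

# Stream the source blob down to local disk in chunks, so memory use stays flat.
src = BlobClient.from_connection_string(CONN_STR, container_name="ingest", blob_name="input.txt")
with open("local_input.txt", "wb") as f:
    for chunk in src.download_blob().chunks():
        f.write(chunk)

# ... process local_input.txt line by line as in the function above,
# writing the result to local_output.txt ...

# Upload the processed result back to Blob Storage.
dst = BlobClient.from_connection_string(CONN_STR, container_name="ingest", blob_name="output.txt")
with open("local_output.txt", "rb") as f:
    dst.upload_blob(f, overwrite=True)

Would that kind of explicit download/upload be more reliable for large files than what I am doing now?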