I am using the following code to read a CSV file into a list of dictionaries:
import csv

file_name = path + '/' + file.filename
with open(file_name, newline='') as csv_file:
    csv_dict = [{k: v for k, v in row.items()}
                for row in csv.DictReader(csv_file)]

for item in csv_dict:
    call_api(item)
This reads the file and calls the function for each row. As the number of rows increases, so does the number of API calls. It is also not possible to load all of the contents into memory and split and call the API from there, because the data is too big. So I would like an approach where the file is read using a limit and offset, as in SQL queries. But how can this be done in Python? I am not seeing any option in the csv documentation to specify the number of rows to read or the number of rows to skip. If someone can suggest a better approach, that would also be fine.