I'm looking for a Pythonic way to iterate through a DataFrame's index so that I can split a computationally heavy workload into a series of smaller chunks to run. The output of each chunk will be appended to a CSV in order to avoid hitting resource limits.
For example, if I have some list whose length is prime, I'd like to split that list into a number of lists of relatively equal length, run the computations against that set, and append the output of that set to a CSV. Rinse and repeat down the index of the DataFrame until all of the rows have been run against.
e.g.
- Run some function on the first 10,000 rows - store in CSV
- Run on rows 10,001 - 20,000 - store in CSV
- .....
- Run through row 111,376 - store in CSV
- end.
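
Here's a rough sketch of the pattern I have in mind, in case it helps clarify the question. `heavy_computation`, `input.csv`, and `output.csv` are just placeholders for my actual function and files:

```python
import numpy as np
import pandas as pd

def heavy_computation(chunk: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for the expensive per-row work.
    return chunk.assign(result=chunk.sum(axis=1, numeric_only=True))

df = pd.read_csv("input.csv")   # ~111,376 rows in my case
chunk_size = 10_000
out_path = "output.csv"

# Split the index into roughly equal pieces (the last one may be smaller).
n_chunks = int(np.ceil(len(df) / chunk_size))
for i, idx in enumerate(np.array_split(df.index, n_chunks)):
    result = heavy_computation(df.loc[idx])
    # Append each chunk's output; write the header only for the first chunk.
    result.to_csv(out_path, mode="a", header=(i == 0), index=False)
```

Is there a cleaner or more idiomatic way to do this kind of chunked iteration over a DataFrame?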