
I have 60 HUGE csv files (around 2.5 GB each). Each covers data for one month and has a 'distance' column I am interested in. Each file has around 14 million rows.

I need to find the average distance for each month.

This is what I have so far:

import pandas as pd
for x in range(1, 60):
    df=pd.read_csv(r'x.csv', error_bad_lines=False, chunksize=100000)
    for chunk in df:
        print chunk["distance"].mean()

First, I know 'print' is not a good idea; I need to assign the mean to a variable, I guess. Second, what I need is the average for the whole dataframe, not just for each chunk.

But I don't know how to do that. I was thinking of getting the average of each chunk and then taking the simple average of all the chunk means. That should give me the average for the whole dataframe, as long as every chunk has the same number of rows.
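
For reference, the last chunk of a file is usually smaller than chunksize, so an unweighted mean of chunk means is slightly off; weighting each chunk by its row count (equivalently, accumulating a running sum and row count) gives the exact mean. A minimal sketch for a single file, with illustrative variable names:

import pandas as pd

# compare the naive mean-of-chunk-means with the exact weighted mean
chunk_means = []
weighted_sum, row_count = 0.0, 0
for chunk in pd.read_csv('1.csv', chunksize=100000):
    chunk_means.append(chunk['distance'].mean())
    weighted_sum += chunk['distance'].sum()
    row_count += len(chunk)
print(sum(chunk_means) / len(chunk_means))  # biased if chunk sizes differ
print(weighted_sum / row_count)             # exact mean of the column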

Third, I need to do this for all 60 csv files. Is my loop over the files in the code above correct? My files are named 1.csv to 60.csv.

  • Keep track of the aggregate sum of distances and the line count; then divide. Also, if speed is an issue, consider looking at something like this: http://stackoverflow.com/questions/3122442/how-do-i-calculate-the-mean-of-a-column – hilberts_drinking_problem Dec 06 '16 at 01:58
  • Do you want to do the job only in Python, or can you use GNU/Linux tools like sed and awk? – Chiheb Nexus Dec 06 '16 at 02:11
  • Sorry, not familiar with sed and awk. Would prefer Python, if possible. – Mishal Ahmed Dec 06 '16 at 02:39
  • Have a look at the 'usecols' and 'squeeze' arguments to pd.read_csv(). No sense loading columns that you are not using, right? – Alex O Dec 06 '16 at 02:53

2 Answers

A few things I would fix, based on how your files are named. I presume they are named "1.csv", "2.csv", and so on, so the filename has to be built from the loop variable rather than the literal 'x.csv'. Also remember that range is exclusive at the top end, so you need to go to 61 to include 60.csv.

import pandas as pd

distance_array = []
for x in range(1, 61):
    # build the filename from the loop index: "1.csv" ... "60.csv"
    reader = pd.read_csv(str(x) + ".csv", error_bad_lines=False,
                         chunksize=100000)
    for chunk in reader:
        # collect every distance value from this chunk
        distance_array.extend(chunk['distance'])
print(sum(distance_array) / len(distance_array))
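
Note that distance_array ends up holding every distance value from all 60 files (on the order of 840 million entries), and the final print gives one overall average rather than one per month; if memory is a concern, the running-sum approach sketched under the second answer only ever keeps one chunk in memory and reports a figure per file.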
– Andrew

I am presuming that the datasets are too large to load into memory as a pandas dataframe. If that is the case, consider using a generator on each csv file, something similar to: Where to use yield in Python best?

As the overall result you are after is an average, you can accumulate the total sum of the distances as you step through the rows, keep a running count of how many rows you have seen, and divide at the end; a sketch of that idea follows.
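
A minimal sketch, assuming the 1.csv to 60.csv naming from the question; the distance_sums generator and the other names are illustrative, not part of the original answer:

import pandas as pd

def distance_sums(filename, chunksize=100000):
    # yield (sum, row_count) per chunk so only one chunk is in memory
    for chunk in pd.read_csv(filename, chunksize=chunksize):
        yield chunk['distance'].sum(), len(chunk)

for x in range(1, 61):
    total, rows = 0.0, 0
    for s, n in distance_sums(str(x) + '.csv'):
        total += s
        rows += n
    # one average per monthly file, as the question asks
    print('%d.csv: average distance = %f' % (x, total / rows))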

– Mike Dale