The problem in your example.
Trying your code on a small scale, I notice that even if you set dtype=int, you actually end up with dtype=object in your resulting DataFrame.
import pandas as pd

header = ['a','b','c']
rows = 11
df = pd.DataFrame(columns=header, index=range(rows), dtype=int)
df.dtypes
a object
b object
c object
dtype: object
This is because, even though you tell the pd.DataFrame constructor that the columns should be dtype=int, it cannot override the dtypes that are ultimately determined by the data in the columns. This is because pandas is tightly coupled to NumPy and NumPy dtypes. The problem is that there is no data in your created DataFrame, so NumPy fills it with np.NaN, which does not fit in an integer. NumPy therefore falls back to dtype object.
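A quick way to see this (a minimal NumPy sketch, separate from your code): NaN is a floating-point value and simply cannot be stored in an integer array.

import numpy as np

# NaN fits fine in a float array
np.array([np.nan, 1.0], dtype=float)      # array([nan,  1.])

# ...but not in an integer array; NumPy raises instead of storing it
try:
    np.array([np.nan, 1], dtype=int)
except ValueError as err:
    print(err)                            # e.g. "cannot convert float NaN to integer"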
The problem with the object dtype.
Having the dtype set to object means a big overhead in memory consumption and allocation time compared to having the dtype set to integer or float.
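To get a rough feel for that overhead, here is a small sketch comparing an object frame with a float frame (exact numbers will vary with your pandas and NumPy versions):

import pandas as pd

header = ['a','b','c']
rows = 11

df_object = pd.DataFrame(columns=header, index=range(rows), dtype=object)
df_float = pd.DataFrame(columns=header, index=range(rows), dtype=float)

# deep=True also counts the Python objects that the object columns point to
print(df_object.memory_usage(deep=True).sum())
print(df_float.memory_usage(deep=True).sum())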
Workaround for your example.
df = pd.DataFrame(columns=header, index=range(rows), dtype=float)
This works just fine, since np.NaN can live in a float. This produces:
a float64
b float64
c float64
dtype: object
It should also take less memory.
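If you really need integer columns in the end, one option (just a sketch, assuming the frame gets fully populated first) is to fill the float frame and cast afterwards:

import pandas as pd

header = ['a','b','c']
rows = 11

df = pd.DataFrame(columns=header, index=range(rows), dtype=float)

# ...fill every cell with real values (0 is only a placeholder here)...
df.loc[:, :] = 0

# once no NaN is left, the cast to int succeeds
df = df.astype(int)
print(df.dtypes)    # int64 on most platforms, instead of float64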
More on how to work with dtypes.
See this related post for details on dtypes:
Pandas read_csv low_memory and dtype options