I have a very big CSV file that I cannot read entirely into memory. I only want to read and process a few lines from it. I am looking for a function in pandas that can handle this task, which basic Python handles well:
with open('abc.csv') as f:
    line = f.readline()
    # skip lines until reaching a particular line number...
However, if I do this in pandas, each call starts over from the first line:
datainput1 = pd.read_csv('matrix.txt', sep=',', header=None, nrows=1)
datainput2 = pd.read_csv('matrix.txt', sep=',', header=None, nrows=1)
I am looking for an easier way to handle this task in pandas. For example, if I want to read rows 1000 to 2000, how can I do this quickly?
I want to use pandas because I want to read the data into a DataFrame.
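Here is a sketch of the kind of thing I am hoping for, based on the `skiprows` and `nrows` parameters of `read_csv` (the small in-memory CSV below just stands in for my big file, and the row offsets are illustrative):

```python
import io
import pandas as pd

# A small stand-in for the big CSV: 10 rows, 3 columns of integers.
csv_data = '\n'.join(
    ','.join(str(r * 3 + c) for c in range(3)) for r in range(10)
)

# Skip the first 4 rows, then read the next 3,
# i.e. rows 4..6 (0-based) of the file.
df = pd.read_csv(io.StringIO(csv_data), sep=',', header=None,
                 skiprows=4, nrows=3)
```

I am not sure whether this is the intended way to read a middle slice of a file, or whether pandas still scans the skipped rows under the hood, which is why I am asking whether there is a quicker approach.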