I am used to R, which offers quick functions to read CSV files column by column. Can anyone propose a quick and efficient way to read large data files (CSV, for example) in Python, e.g. the ith column of a CSV file?
I have the following, but it takes time:
import csv

with open('some.csv', newline='') as f:
    reader = csv.reader(f, delimiter=',')
    header = next(reader)                  # skip the header row
    columns = list(zip(*reader))           # transposes all rows into columns, in memory

print(columns[0])  # the first column
Is there a better way to read data from large files in Python that is at least as quick and memory-efficient as R?
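For reference, one direction I have been considering (just a rough sketch, not benchmarked; `read_column` is a helper name I made up, and the index 0 is only for illustration) is to pull out a single column row by row, so the whole file is never transposed in memory at once:

import csv

def read_column(path, i):
    """Yield the values of column i, skipping the header row."""
    with open(path, newline='') as f:
        reader = csv.reader(f, delimiter=',')
        next(reader)          # skip header
        for row in reader:
            yield row[i]

first_column = list(read_column('some.csv', 0))

This only materializes the one column I ask for, but I don't know whether it is competitive with R in speed.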