I didn't find any related thread.
I have an Excel file that is not especially big on disk (100 MB) but is quite large in shape (930 columns by 35k rows), and that seems to be the problem. Excel opens this file in a few seconds, but pandas takes at least 10-20 minutes on my computer. I tried the following:
- not inferring types, by passing the dtype parameter
- limiting the columns read, with usecols
- iterating over the rows in chunks, with nrows and skiprows in a loop (see the sketch after this list)
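For reference, the chunked loop from the last bullet looked roughly like this (a minimal sketch: the chunk size and total row count are illustrative, and since every call re-parses the workbook it did not reduce the overall time):

import pandas as pd

CHUNK = 5000          # illustrative chunk size, not tuned
TOTAL_ROWS = 35000    # the sheet has roughly 35k data rows

chunks = []
for start in range(0, TOTAL_ROWS, CHUNK):
    # keep the header row (row 0) and skip the data rows already read
    chunks.append(
        pd.read_excel(
            "rei_2018/REI_2018.xlsx",
            engine="openpyxl",
            dtype=str,
            skiprows=range(1, start + 1),
            nrows=CHUNK,
        )
    )
df = pd.concat(chunks, ignore_index=True)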
I cannot convert this Excel file to CSV.
This is my code so far:
import pandas as pd

df = pd.read_excel("rei_2018/REI_2018.xlsx", engine="openpyxl", dtype=str, usecols=['H11'], nrows=200)
Edit 1:
Data: https://www.impots.gouv.fr/portail/www2/fichiers/statistiques/base_de_donnees/rei/rei_2018.zip
I ran the following command on the above data (thus limiting the read to 200 rows); it took exactly 760 seconds:
df = pd.read_excel("rei_2018/REI_2018.xlsx", engine="openpyxl", dtype=str, usecols=['H11'], nrows=200)
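Measured roughly like this (a minimal timing sketch using time.perf_counter; the exact harness is not important):

import time
import pandas as pd

t0 = time.perf_counter()
df = pd.read_excel("rei_2018/REI_2018.xlsx", engine="openpyxl", dtype=str, usecols=['H11'], nrows=200)
print(f"elapsed: {time.perf_counter() - t0:.0f} s")  # ~760 s on my machine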
- pandas version (Windows 64-bit with Anaconda, on an 8 GB RAM Intel i5-8265U):
pd.show_versions()
INSTALLED VERSIONS
------------------
commit : b5958ee1999e9aead1938c0bba2b674378807b3d
python : 3.7.6.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18362
machine : AMD64
processor : Intel64 Family 6 Model 142 Stepping 12, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 1.1.5
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 45.2.0.post20200210
Cython : 0.29.15
pytest : 5.3.5
hypothesis : 5.5.4
sphinx : 2.4.0
blosc : None
feather : None
xlsxwriter : 1.2.7
lxml.etree : 4.3.5
html5lib : 1.0.1
pymysql : None
psycopg2 : None
jinja2 : 2.11.1
IPython : 7.12.0
pandas_datareader: None
bs4 : 4.8.2
bottleneck : 1.3.2
fsspec : 0.6.2
fastparquet : None
gcsfs : None
matplotlib : 3.1.3
numexpr : 2.7.1
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : 0.13.0
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.13
tables : 3.6.1
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : 1.3.0
numba : 0.48.0