
I have a parquet file and I want to read the first n rows from the file into a pandas DataFrame. What I tried:

df = pd.read_parquet(path='filepath', nrows=10)

It did not work and gave me this error:

TypeError: read_table() got an unexpected keyword argument 'nrows'

I tried the skiprows argument as well, but that gave me the same error.

Alternatively, I can read the complete parquet file and keep only the first n rows, but that requires extra computation which I want to avoid.
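
For reference, the whole-file fallback I'd rather avoid looks something like this (just a sketch; 'filepath' is a placeholder):

import pandas as pd

# Reads the entire file into memory, then keeps only the first 10 rows
df = pd.read_parquet('filepath').head(10)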

Is there any way to achieve it?

Sanchit Kumar
  • Partial row-wise reads of Parquet files are now possible (using PyArrow as the backend), as shown here: https://stackoverflow.com/a/69888274/9962007 – mirekphd Dec 26 '21 at 10:37

7 Answers


The accepted answer is out of date. It is now possible to read only the first few rows of a parquet file into pandas, though it is a bit messy and backend-dependent.

To read using PyArrow as the backend, do the following:

from pyarrow.parquet import ParquetFile
import pyarrow as pa

pf = ParquetFile('file_name.pq')
# Pull the first batch of 10 rows from the file without reading the rest
first_ten_rows = next(pf.iter_batches(batch_size=10))
# Convert that single record batch into a pandas DataFrame
df = pa.Table.from_batches([first_ten_rows]).to_pandas()

Change batch_size=10 to match however many rows you want to read in.

David Kaftan

After exploring around and getting in touch with the pandas dev team, the conclusion is that pandas does not support an nrows or skiprows argument when reading a parquet file.

The reason is that pandas uses the pyarrow or fastparquet engine to process parquet files, and pyarrow has no support for partially reading a file or skipping rows (not sure about fastparquet). Below is the link to the issue on the pandas GitHub for discussion.

https://github.com/pandas-dev/pandas/issues/24511
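
Column-wise partial reads are a different story: pandas' read_parquet does accept a columns argument, so you can limit how much of the file is loaded that way, just not by rows. A small sketch; 'col_a' and 'col_b' are placeholder column names:

import pandas as pd

# Only the listed columns are read from the parquet file ('col_a'/'col_b' are placeholders)
df = pd.read_parquet('filepath', columns=['col_a', 'col_b'])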

Sanchit Kumar

Querying Parquet with DuckDB

To provide another perspective, if you're comfortable with SQL, you might consider using DuckDB for this. For example:

import duckdb
nrows = 10
file_path = 'path/to/data/parquet_file.parquet'
df = duckdb.query(f'SELECT * FROM "{file_path}" LIMIT {nrows};').df()

If you're working with partitioned parquet, the above result won't include any of the partition columns, since that information isn't stored in the lower-level files. Instead, you should identify the top folder as a partitioned parquet dataset and register it with a DuckDB connection:

import duckdb
import pyarrow.dataset as ds
nrows = 10
dataset = ds.dataset('path/to/data', 
                     format='parquet',
                     partitioning='hive')
con = duckdb.connect()
con.register('data_table_name', dataset)
df = con.execute(f"SELECT * FROM data_table_name LIMIT {nrows};").df()

You can register multiple datasets with the connection to enable more complex queries. I find DuckDB makes working with parquet files much more convenient, especially when trying to JOIN between multiple Parquet datasets. Install it with conda install python-duckdb or pip install duckdb.
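
As a rough sketch of such a JOIN (the 'orders'/'customers' paths and the customer_id/name columns below are made-up placeholders):

import duckdb
import pyarrow.dataset as ds

# Register two partitioned datasets under table names we can query together
orders = ds.dataset('path/to/orders', format='parquet', partitioning='hive')
customers = ds.dataset('path/to/customers', format='parquet', partitioning='hive')

con = duckdb.connect()
con.register('orders', orders)
con.register('customers', customers)

# Join across the two Parquet datasets and pull only a handful of rows
df = con.execute("""
    SELECT o.*, c.name
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    LIMIT 10;
""").df()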

Jvinniec

Using the pyarrow dataset scanner:

import pyarrow.dataset as ds

n = 10
src_path = "/parquet/path"
# head(n) stops scanning once it has collected n rows
df = ds.dataset(src_path).scanner().head(n).to_pandas()

Winand

The most straightforward option for me seems to be the dask library:

import dask.dataframe as dd
df = dd.read_parquet(path='filepath').head(10)
Pavel Prochazka

As an alternative, you can use the S3 Select functionality from the AWS SDK for pandas (awswrangler), as proposed by Abdel Jaidi in this answer.

pip install awswrangler

import awswrangler as wr

df = wr.s3.select_query(
        sql="SELECT * FROM s3object s limit 5",
        path="s3://filepath",
        input_serialization="Parquet",
        input_serialization_params={},
        use_threads=True,
)
Manuel Montoya

Parquet is column-oriented storage, designed for exactly that... so it's normal to load the whole file to access just one row.

B. M.
  • Yes, parquet is column based. However, columns are divided into *row groups*. This means it is possible to only read a part of a parquet file (i.e. one row group); see the sketch after these comments. See https://parquet.apache.org/documentation/latest/ and https://arrow.apache.org/docs/python/parquet.html#finer-grained-reading-and-writing E.g. Apache Spark is able to read and process different row groups of the same parquet file on different machines in parallel. – mrteutone Nov 18 '21 at 14:13
  • However, row groups are pretty large. In Spark/Hadoop, the default group size is 128/256 MB. – shay__ Apr 06 '22 at 07:45
  • Saying that it is normal isn't very helpful when you get a 10 GB file with a billion rows where just 1 million would be more than enough for your needs. – Alonzorz May 13 '22 at 13:41
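
Following up on the row-group comment above, here is a minimal sketch of reading just the first row group with PyArrow ('file_name.pq' is a placeholder path):

from pyarrow.parquet import ParquetFile

pf = ParquetFile('file_name.pq')       # placeholder path
print(pf.num_row_groups)               # number of row groups in the file
# Read only the first row group into pandas, rather than the whole file
df = pf.read_row_group(0).to_pandas()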