I have a Parquet file that is 350 GB in size, so I want to read the data in chunks.
I know how to read the full Parquet file and then convert it to pandas, as below:
import pyarrow.parquet as pq

# Loads the entire file into memory at once
table = pq.read_table(filepath)
df = table.to_pandas(integer_object_nulls=True)
Is it possible to read the data chunk by chunk instead? Can someone please clarify this!