We are writing the table with repartition and partitionBy, which creates one Parquet file per partition:
df.repartition("TimeID").write.partitionBy("TimeID").parquet("/path/")
We want to limit each Parquet file inside a partition to 200 MB at most. If the data for a partition exceeds 200 MB, we want Spark to create an additional Parquet file inside the same partition instead of one oversized file. For some partitions we currently see single Parquet files of around 1 GB to 2 GB because of the large data volume for that day. How can we make sure that no individual Parquet file exceeds 200 MB?
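For reference, one idea we are considering is capping the row count per output file with Spark's maxRecordsPerFile write option, deriving the record limit from an estimated average row size. This is only a sketch: the 200 MB target translated into a row count, and the estimated_row_bytes figure, are our own guesses, not measured values, since Parquet compression means row count is only a proxy for file size.

# df is the same DataFrame as above.
# Assumption: an average serialized row size of ~1 KB, so
# 200 MB / 1 KB gives roughly 200,000 rows per file. This figure is
# hypothetical and would need tuning against real output file sizes.
target_file_bytes = 200 * 1024 * 1024
estimated_row_bytes = 1024
max_records = target_file_bytes // estimated_row_bytes

(df.repartition("TimeID")
   .write
   .option("maxRecordsPerFile", max_records)  # Spark rolls over to a new file at this row count
   .partitionBy("TimeID")
   .parquet("/path/"))

Would this reliably cap file sizes in practice, or is there a way to target a byte size directly rather than a row count?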