I am currently working with .grib data using xarray, and I can open the datasets as shown in the xarray documentation:
ds_grib = xr.open_dataset("example.grib", engine="cfgrib")
However, this process is scaling unreasonably with dataset size. For example, for the same dataset but covering a larger time interval, a 2 MB file is read almost instantly while a 20 MB file takes several minutes to load.
Is there a way to speed up this process? Or would the most straightforward approach be to break the data into smaller samples and then load and merge the different time intervals?
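To clarify what I mean by "load and merge": something along these lines, where each piece would in practice come from `xr.open_dataset(path, engine="cfgrib")`. Here I fake two small in-memory time chunks (variable name `t2m` and the coordinates are just placeholders) to illustrate the concatenation step:

```python
import numpy as np
import xarray as xr

# In the real case, each chunk would be read from its own GRIB file with
# xr.open_dataset(path, engine="cfgrib"); these stand in for two time slices.
chunk1 = xr.Dataset(
    {"t2m": (("time", "lat"), np.zeros((2, 3)))},
    coords={"time": [0, 1], "lat": [10.0, 20.0, 30.0]},
)
chunk2 = xr.Dataset(
    {"t2m": (("time", "lat"), np.ones((2, 3)))},
    coords={"time": [2, 3], "lat": [10.0, 20.0, 30.0]},
)

# Merge the pieces back together along the time axis.
ds = xr.concat([chunk1, chunk2], dim="time")
print(ds.sizes["time"])  # 4
```

Is this kind of split-then-concat workflow the recommended way to handle larger GRIB archives, or is there a faster option within cfgrib itself?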
Thanks.