I am working with NASA NEX-GDDP-CMIP6 data. I currently have working code that individually opens and slices each file, but it takes days to download one variable for all model outputs and scenarios. My goal is to get all temperature and precipitation data for all model outputs and scenarios, then apply climate indicators and build an ensemble with xclim.
import xarray as xr

url = 'https://ds.nccs.nasa.gov/thredds2/dodsC/AMES/NEX/GDDP-CMIP6/UKESM1-0-LL/ssp585/r1i1p1f2/tasmax/tasmax_day_UKESM1-0-LL_ssp585_r1i1p1f2_gn_2098.nc'
lat = 53
lon = 0
try:
    with xr.open_dataset(url) as ds:
        # Interpolate to the point of interest, then write that year to a local netCDF
        ds.interp(lat=lat, lon=lon).to_netcdf(url.split('/')[-1])
except Exception as e:
    print(e)
This code works but is very slow (days for one variable at one location). Is there a better, faster way? I'd rather not download the whole files, as they are each 240 MB!
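For context, the step I eventually want to run on the downloaded point data looks roughly like this. It's only a sketch: dsets stands for the hypothetical list of per-model point datasets produced by code like the above, and the 30 degC threshold in tx_days_above is just an illustrative indicator choice:

import xclim
from xclim import ensembles

# dsets: hypothetical list of per-model point datasets (one per model/scenario)
ens = ensembles.create_ensemble(dsets)  # stacks members along a new 'realization' dim

# Example indicator: annual count of days with tasmax above 30 degC
hot_days = xclim.atmos.tx_days_above(tasmax=ens.tasmax, thresh='30.0 degC', freq='YS')

# Ensemble statistics (mean, std, max, min) across models
stats = ensembles.ensemble_mean_std_max_min(hot_days.to_dataset())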
Update:
I have also tried the following to take advantage of dask's parallel tasks. It is slightly faster, but still on the order of days to complete for one full variable:
def interp_one_url(path, lat, lon):
    # Open one remote file over OPeNDAP and interpolate it to the point of interest
    with xr.open_dataset(path) as ds:
        ds = ds.interp(lat=lat, lon=lon)
        return ds
urls = ['https://ds.nccs.nasa.gov/thredds2/dodsC/AMES/NEX/GDDP-CMIP6/UKESM1-0-LL/ssp585/r1i1p1f2/tasmax/tasmax_day_UKESM1-0-LL_ssp585_r1i1p1f2_gn_2100.nc',
        'https://ds.nccs.nasa.gov/thredds2/dodsC/AMES/NEX/GDDP-CMIP6/UKESM1-0-LL/ssp585/r1i1p1f2/tasmax/tasmax_day_UKESM1-0-LL_ssp585_r1i1p1f2_gn_2099.nc']
lat = 53
lon = 0

# Local output filenames, one per remote file
paths = [url.split('/')[-1] for url in urls]
datasets = [interp_one_url(url, lat, lon) for url in urls]
xr.save_mfdataset(datasets, paths=paths)
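Since the list comprehension above still fetches the URLs one at a time, I'd expect wrapping the function in dask.delayed to run the per-file requests concurrently on the default threaded scheduler. This is only a sketch, assuming urls, paths, lat, and lon are defined as above; the .load() pulls the interpolated point into memory before the remote file handle is closed:

import dask
import xarray as xr

@dask.delayed
def interp_one_url_delayed(path, lat, lon):
    # Open one remote file, interpolate to the point, and load the result into memory
    with xr.open_dataset(path) as ds:
        return ds.interp(lat=lat, lon=lon).load()

# Build one delayed task per URL, then execute them concurrently
tasks = [interp_one_url_delayed(url, lat, lon) for url in urls]
datasets = list(dask.compute(*tasks))
xr.save_mfdataset(datasets, paths=paths)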