I have a large data set with global latitudes and longitudes, but I am only interested in a specific region, so I want to filter out all lat/lon pairs that fall outside of it. The problem is that I am parsing the data with if statements, which takes too long. Is there a faster way to accomplish this?
The data comes from a netCDF file and can be stored in a dictionary. I only want latitudes between 10 and 80 degrees North, and longitudes between -170 and -50 degrees. Here is what I have tried so far:
ret_dict = {}
with Dataset(filename, 'r') as fid:
    ret_dict['time'] = fid.variables['timeObs'][:]
    sort_order = np.argsort(ret_dict['time'])
    # read the variable once, then filter element by element
    lat_sorted = fid.variables['latitude'][:][sort_order]
    lat1 = [i for i in lat_sorted if i > 10]
    lat2 = [i for i in lat1 if i < 80]
The above code can be repeated for longitudes, but it is too slow for my large amount of data. It also doesn't give me the indices, which I need in order to keep the original latitude and longitude pairs aligned. How can I quickly truncate the data for all variables?
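To illustrate the behavior I'm after, here is a small self-contained sketch with made-up lat/lon values standing in for the netCDF variables. A single NumPy boolean mask filters both coordinates at once, and np.where recovers the indices so the pairs stay aligned:

```python
import numpy as np

# Hypothetical values standing in for fid.variables['latitude'][:] etc.
lat = np.array([5.0, 25.0, 45.0, 85.0, 60.0])
lon = np.array([-160.0, -100.0, -40.0, -60.0, -75.0])

# One boolean mask for the whole region of interest
mask = (lat > 10) & (lat < 80) & (lon > -170) & (lon < -50)

# The mask keeps the lat/lon pairs aligned; np.where gives the indices
idx = np.where(mask)[0]        # array([1, 4])
lat_region = lat[mask]         # array([25., 60.])
lon_region = lon[mask]         # array([-100., -75.])
```

The same `idx` (or `mask`) could then be applied to every other variable of the same length, which is what the if-statement version cannot do.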
EDIT: The answer below is correct for the first part of the question, but I am also trying to truncate other variables using the indices of the filtered latitude. I am trying:
lon = [j for i,(j,i) in zip(fid.variables['longitude'][:],fid.variables['longitude']) if 10<i<80]
However, I am getting the error: TypeError: 'numpy.float32' object is not iterable
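(As I understand it, the error arises because each element produced by the zip is a pair of scalars, and the pattern i,(j,i) then tries to unpack a single float. The hypothetical sketch below, with made-up arrays in place of fid.variables[...][:], shows the kind of index-based selection I am trying to achieve without zip:)

```python
import numpy as np

# Hypothetical arrays standing in for fid.variables[...][:]
latitude = np.array([5.0, 25.0, 45.0, 85.0])
longitude = np.array([-160.0, -100.0, -40.0, -60.0])

# Indices where latitude is in range; the same indices then
# select the matching longitudes, keeping the pairs aligned
keep = np.where((latitude > 10) & (latitude < 80))[0]   # array([1, 2])
lon_subset = longitude[keep]                            # array([-100., -40.])
```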