I am currently trying to download Sentinel-5P data via the sentinelsat API. The first thing I noticed was that the files (or at least their names) returned there do not always match the filenames shown in the Open Access Hub.
I then try to download all files for a given day, to calculate some sort of day average, with the code below:
```python
import os

from sentinelsat import SentinelAPI


def retrieveS5PProducts(dateStr="2019-01-04", productType="L2__CH4___", fromTimeStr="00:00:00.000",
                        toTimeStr="23:59:00.000", processingMode="Offline"):
    print("********************* download " + productType + " for " + dateStr + " *********************")
    api = SentinelAPI('*****', '******', 'https://s5phub.copernicus.eu/dhus/')
    positionStr = "[{0}T{1}Z TO {0}T{2}Z]".format(dateStr, fromTimeStr, toTimeStr)
    footprintWorld = "POLYGON((-180.0 90.0, 180.0 90.0, 180.0 -90.0, -180.0 -90.0, -180.0 90.0))"
    products = api.query(footprintWorld,
                         platformname='Sentinel-5',
                         producttype=productType,
                         beginPosition=positionStr,
                         endPosition=positionStr,
                         # cloudcoverpercentage=(0, 30)
                         processingmode=processingMode
                         )
    if products:
        gdf = api.to_geodataframe(products)
        print(gdf)
        basePath = "./input/" + productType + "/" + dateStr + "/"
        os.makedirs(basePath, exist_ok=True)  # make sure the download target exists
        for uuid in gdf["uuid"].values:
            api.download(uuid, basePath)
```
This gives me the following output/files:
title ... geometry
ed0a4bbf-72d2-4f89-952a-c1e16e2d7008 S5P_OFFL_L2__CO_____20230726T160737_20230726T1... ... MULTIPOLYGON (((-53.02507 -81.51363, -38.15521...
bd9adfa2-c035-4dcc-8004-0f13283ba859 S5P_OFFL_L2__CO_____20230726T142608_20230726T1... ... MULTIPOLYGON (((-27.65645 -81.53316, -12.75405...
387f45cf-08c3-4dcc-ba2a-355b06b70d4a S5P_OFFL_L2__CO_____20230726T110309_20230726T1... ... MULTIPOLYGON (((23.10957 -81.50548, 37.96240 -...
cf8155fd-6571-4329-bf8d-5fd6bdce04c6 S5P_OFFL_L2__CO_____20230726T124438_20230726T1... ... MULTIPOLYGON (((-2.27438 -81.52803, -0.00000 -...
75786eca-e6bc-4fed-8851-248d04c0c415 S5P_OFFL_L2__CO_____20230726T092139_20230726T1... ... MULTIPOLYGON (((48.47455 -81.50529, 63.32685 -...
dedf5549-a11c-437f-89df-611882ad5cea S5P_OFFL_L2__CO_____20230726T041710_20230726T0... ... MULTIPOLYGON (((124.66715 -81.43737, 139.40253...
a5fa8405-a046-4ae6-a7a4-866241bf9119 S5P_OFFL_L2__CO_____20230726T023541_20230726T0... ... MULTIPOLYGON (((150.04360 -81.44411, 164.78845...
2afde945-a5e3-4962-a783-f91e7faf9867 S5P_OFFL_L2__CO_____20230726T211206_20230726T2... ... MULTIPOLYGON (((-129.24042 -81.60899, -114.202...
3181387b-a3db-411c-ad1e-fbc52b5077bc S5P_OFFL_L2__CO_____20230726T193036_20230726T2... ... MULTIPOLYGON (((-103.84656 -81.58432, -88.8523...
3e1cb690-28d3-4e6d-83f5-b4f2738b4eb7 S5P_OFFL_L2__CO_____20230726T174906_20230726T1... ... MULTIPOLYGON (((-78.43284 -81.55354, -63.49611...
289102e8-97a4-433c-ae13-8a51b12c690c S5P_OFFL_L2__CO_____20230726T005412_20230726T0... ... MULTIPOLYGON (((175.40688 -81.44910, 180.00000...
Everything works fine, except that one file seems to be missing. When searching in the Open Access Hub, there is a file ingested at 7 am that does not show up in the API response. This is the dataset covering India, so I get a hole in my averaged dataset there. This is not only true for the given day, 2023-07-26, but for every day I investigated, from mid-May to mid-July 2023. Can anyone explain this?
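For what it's worth, the gap is easy to spot in the sensing start times embedded in the returned product titles. This small helper (`find_gaps` and the 150-minute threshold are my own choices, based on the roughly 100-minute S5P orbit) flags it:

```python
from datetime import datetime, timedelta


def find_gaps(titles, max_gap=timedelta(minutes=150)):
    """Flag gaps between consecutive sensing start times that are much
    longer than one S5P orbit (~100 min), i.e. likely missing products."""
    # The sensing start time sits at a fixed offset in S5P product names:
    # S5P_OFFL_L2__CO_____20230726T160737_...
    #                     ^ characters 20..34
    starts = sorted(datetime.strptime(t[20:35], "%Y%m%dT%H%M%S") for t in titles)
    return [(a, b) for a, b in zip(starts, starts[1:]) if b - a > max_gap]
```

Fed with the eleven titles from the output above, it reports exactly one gap, between 04:17 and 09:21 UTC, which is where the ~07:00 product covering India should be.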
Edit: When I copy the ID of the missing dataset(s) and download it via api.download("0f75554c-f2c4-478e-aed7-876f28c3ceb5"), the file gets downloaded. But since I want my script to work independently of user input, this solution is not suitable for my case.
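One workaround I am considering (this is an assumption on my part, I have not verified that it brings the missing product back): split the day into several smaller query windows and merge the results, in case the full-day query runs into some server-side limit. `time_windows` is just a helper I made up:

```python
def time_windows(date_str, hours_per_window=6):
    """Build [begin TO end] position strings that together cover one day,
    so the day can be queried in several smaller requests."""
    windows = []
    for h in range(0, 24, hours_per_window):
        last_hour = min(h + hours_per_window, 24) - 1
        windows.append("[{0}T{1:02d}:00:00.000Z TO {0}T{2:02d}:59:59.999Z]".format(
            date_str, h, last_hour))
    return windows
```

Each window string could then be passed as beginPosition/endPosition in place of positionStr in the query above.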