So, I have a .tif image, 7000 x 6000 pixels. Using the gdal Python package I have read the real LAT/LON coordinates of all four corners (upperLeft, upperRight, lowerRight, lowerLeft). I also have Python code that, wherever I point the mouse inside the .tif image, shows me the mouse position as pixel coordinates (x, y), counted from the upperLeft corner at (0, 0). These are pixel coordinates, not the real LAT/LON. My idea was to take the upperLeft LAT/LON (let's say LAT = 100000 and LON = 100000) and add an offset to get the pointer's real coordinates, but I cannot simply add pixel values to real coordinates, since the units are different... I think you get the rationale. I have been trying to fix this for many days without success! My question is: how can I translate the mouse pointer's (x, y) pixel coordinates to real LAT/LON coordinates, so that the application shows the real LAT/LON of wherever the mouse points in the .tif image?
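To illustrate the rationale: since I already know the corner coordinates, the conversion I am after amounts to linear interpolation between them. Here is a minimal sketch of that idea (the function name, corner values, and the assumption of a north-up image with no rotation are mine, just for illustration):

```python
def pointer_to_latlon(px, py, width, height, upper_left, lower_right):
    """Linearly interpolate pixel (px, py) between two known corners.

    Assumes a north-up image with no rotation; a rotated image needs the
    full 6-parameter geotransform from GDAL instead.
    """
    ul_lon, ul_lat = upper_left
    lr_lon, lr_lat = lower_right
    lon = ul_lon + (px / width) * (lr_lon - ul_lon)
    lat = ul_lat + (py / height) * (lr_lat - ul_lat)
    return lon, lat

# Hypothetical corners for a 7000 x 6000 image:
print(pointer_to_latlon(3500, 3000, 7000, 6000,
                        upper_left=(20.0, 40.0), lower_right=(21.0, 39.0)))
# centre pixel -> midpoint of the corners: (20.5, 39.5)
```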
Update (15/2/2021): From the link you posted (Find Latitude/Longitude Coordinates of Every Pixel in a GeoTiff Image) I took this piece of code:
def pixel2coord(img_path, x, y):
    """
    Returns latitude/longitude coordinates from pixel x, y coords

    Keyword Args:
        img_path: Text, path to tif image
        x: Pixel x coordinates. For example, if numpy array, this is the column index
        y: Pixel y coordinates. For example, if numpy array, this is the row index
    """
    # Open tif file
    ds = gdal.Open(img_path)

    old_cs = osr.SpatialReference()
    old_cs.ImportFromWkt(ds.GetProjectionRef())

    # Create the new coordinate system. In this case we'll use WGS 84.
    # This is necessary because Planet imagery defaults to UTM (Zone 15),
    # so we want to convert to latitude/longitude.
    wgs84_wkt = """
    GEOGCS["WGS 84",
        DATUM["WGS_1984",
            SPHEROID["WGS 84",6378137,298.257223563,
                AUTHORITY["EPSG","7030"]],
            AUTHORITY["EPSG","6326"]],
        PRIMEM["Greenwich",0,
            AUTHORITY["EPSG","8901"]],
        UNIT["degree",0.01745329251994328,
            AUTHORITY["EPSG","9122"]],
        AUTHORITY["EPSG","4326"]]"""
    new_cs = osr.SpatialReference()
    new_cs.ImportFromWkt(wgs84_wkt)

    # Create a transform object to convert between coordinate systems
    transform = osr.CoordinateTransformation(old_cs, new_cs)

    gt = ds.GetGeoTransform()

    # GDAL affine transform parameters. According to the GDAL documentation,
    # xoff/yoff are the coordinates of the image's upper-left corner, a/e are
    # the pixel width/height, and b/d are rotation terms (zero if the image
    # is north up).
    xoff, a, b, yoff, d, e = gt

    xp = a * x + b * y + xoff
    yp = d * x + e * y + yoff

    lat_lon = transform.TransformPoint(xp, yp)
    xp = lat_lon[0]
    yp = lat_lon[1]
    return (xp, yp)
and I try to combine it with this piece of code, which shows the pixel coordinates of the image in a PyQt application:
class MyToolBar(mpl_qt.NavigationToolbar2QT):
    def set_message(self, s):
        try:
            sstr = s.split()
            while len(sstr) > 5:
                del sstr[0]
            x, y = float(sstr[0][2:]), float(sstr[1][2:])
            s = f'x = {x:.2f}\ny = {y:.2f}'
        except Exception:
            pass
        if self.coordinates:
            self.locLabel.setText(s)
like this:
...
x, y = float(sstr[0][2:]), float(sstr[1][2:])
xReturned, yReturned = pixel2coord('/Desktop/myImage.tif', x, y)
s = f'x = {xReturned:.2f}\ny = {yReturned:.2f}'
...
But it does not work... I want to print the real coordinates, not the pixel coordinates, and I want this to work with whichever .tif I import each time, not be hard-coded to one image. Any idea what is wrong with the code shown above?
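For reference, here is the direction I am currently experimenting with (the helper names apply_geotransform and make_pixel2coord are mine, and this assumes GDAL 3). Two things look suspicious in the original: since GDAL 3, EPSG:4326 uses the authority axis order (latitude first), so TransformPoint effectively returns (lat, lon, z) and the two values come back swapped relative to what the code above expects; and pixel2coord reopens the .tif on every mouse move, which is slow:

```python
def apply_geotransform(gt, x, y):
    """Pixel (x, y) -> georeferenced (xp, yp) via GDAL's 6-term affine gt."""
    xoff, a, b, yoff, d, e = gt
    return a * x + b * y + xoff, d * x + e * y + yoff

def make_pixel2coord(img_path):
    """Open img_path once and return a fast pixel -> (lon, lat) function."""
    from osgeo import gdal, osr  # local import keeps the affine helper pure
    ds = gdal.Open(img_path)
    gt = ds.GetGeoTransform()
    old_cs = osr.SpatialReference()
    old_cs.ImportFromWkt(ds.GetProjectionRef())
    new_cs = osr.SpatialReference()
    new_cs.ImportFromEPSG(4326)
    # Under GDAL 3, force the traditional (x, y) = (lon, lat) axis order
    new_cs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
    transform = osr.CoordinateTransformation(old_cs, new_cs)

    def pixel2coord(x, y):
        xp, yp = apply_geotransform(gt, x, y)
        lon, lat, _ = transform.TransformPoint(xp, yp)
        return lon, lat

    return pixel2coord
```

The idea is that MyToolBar.__init__ would call self.pixel2coord = make_pixel2coord(img_path) once, with whatever .tif was imported, and set_message would then call self.pixel2coord(x, y) instead of reopening the file on every mouse move. Is this the right direction?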