I would build a rectangular texture from it.
You will need two 2D textures/arrays: one for the r,g,b color summation (`avg`) and one for the sample count (`cnt`). Also, I am not convinced I would use OpenGL/GLSL for this; it seems to me that C/C++ will be better suited.
I would do it like this:
1. blank the destination textures (`avg[][]=0`, `cnt[][]=0`)
2. obtain satellite position/direction and time

   From the position and direction create a transformation matrix which projects Earth the same way as on the photo. Then from the time determine the rotation shift.
3. loop through the entire Earth's surface

   Just two nested loops: `a` - rotation and `b` - distance from the equator.
4. get `x,y,z` from `a,b` and the transform matrix + rotation shift (`a`-axis)

   You can also do it backwards, `a,b,z = f(x,y)`; that is more tricky, but faster and more accurate. You can also interpolate `x,y,z` between neighboring (pixels/areas) `[a][b]`.
5. add pixel

   If `x,y,z` is on the front side (`z>0` or `z<0`, depending on the camera `Z` direction) then `avg[a][b]+=image[x][y]; cnt[a][b]++;`
6. end of the nested loops from point #3
7. goto #2 with the next photo
8. loop through the entire `avg` texture to restore the average color: `if (cnt[a][b]) avg[a][b]/=cnt[a][b];` (see the sketch below)
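To make the steps concrete, here is a minimal C++ sketch of the whole pass. It assumes an orthogonal projection of a unit sphere; the resolutions, the matrix `M` and the rotation shift `a0` are hypothetical placeholders to be filled from your real satellite data:

```cpp
#include <math.h>
#include <string.h>

const float PI = 3.14159265f;
const int na = 1024, nb = 512;      // destination texture resolution (a,b) - placeholder
const int nx = 2048, ny = 2048;     // satellite photo resolution (x,y) - placeholder

float avg[na][nb][3];               // r,g,b color summation
int   cnt[na][nb];                  // sample count per texel
float image[nx][ny][3];             // current photo (assumed already loaded)
float M[9];                         // #2 transform matrix from position/direction
float a0;                           // #2 rotation shift computed from time

void blank()                        // #1 blank the destination textures
{
    memset(avg, 0, sizeof(avg));
    memset(cnt, 0, sizeof(cnt));
}

void add_photo()                    // #3..#6 process one photo
{
    for (int a = 0; a < na; a++)                // rotation
     for (int b = 0; b < nb; b++)               // distance from the equator
     {
        float lon = (2.0f * PI * a) / na + a0;  // apply rotation shift (a-axis)
        float lat = (PI * b) / nb - 0.5f * PI;
        float p[3] = { cosf(lat) * cosf(lon),   // unit sphere point for (a,b)
                       cosf(lat) * sinf(lon),
                       sinf(lat) };
        float x = M[0]*p[0] + M[1]*p[1] + M[2]*p[2];    // #4 x,y,z from a,b
        float y = M[3]*p[0] + M[4]*p[1] + M[5]*p[2];
        float z = M[6]*p[0] + M[7]*p[1] + M[8]*p[2];
        if (z < 0.0f) continue;                 // #5 back side -> skip (sign depends on camera Z)
        int ix = (int)(0.5f * (+x + 1.0f) * (nx - 1)); // orthogonal projection to pixel
        int iy = (int)(0.5f * (-y + 1.0f) * (ny - 1)); // y inverted (see Edit1)
        if ((ix < 0) || (ix >= nx) || (iy < 0) || (iy >= ny)) continue;
        for (int c = 0; c < 3; c++) avg[a][b][c] += image[ix][iy][c];
        cnt[a][b]++;
     }
}

void average()                      // #8 restore the average color
{
    for (int a = 0; a < na; a++)
     for (int b = 0; b < nb; b++)
      if (cnt[a][b])
       for (int c = 0; c < 3; c++) avg[a][b][c] /= cnt[a][b];
}
```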
[Notes]
- You can test whether the copied pixel was obtained during day or night (use only what you want and do not mix both together!!!). You can also detect clouds (I think gray/white-ish colors, not snow) and ignore them; see the sketch below.
- Do not overflow the colors; you can use 3 separate textures `r[][],g[][],b[][]` instead of `avg` to avoid that.
- You can ignore areas near the edges of Earth to avoid distortions.
- You can apply lighting corrections from the time and the `a,b` coordinates to normalize the illumination.
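For example, the day/night and cloud tests could look like this. The sun direction would be computed from the photo time (not shown here), and the `0.1`/`0.6` cloud thresholds are just guesses to tune:

```cpp
#include <math.h>

// Day side test: p is the unit sphere point for (a,b), sun is a unit vector
// pointing toward the Sun (computed from the photo time, not shown here).
bool is_day(const float p[3], const float sun[3])
{
    return (p[0]*sun[0] + p[1]*sun[1] + p[2]*sun[2]) > 0.0f;
}

// Cloud test: gray/white-ish means low saturation and high brightness;
// the 0.1 and 0.6 thresholds are guessed values to be tuned.
bool is_cloud(float r, float g, float b)   // r,g,b in <0,1>
{
    float mx = fmaxf(r, fmaxf(g, b));
    float mn = fminf(r, fminf(g, b));
    return ((mx - mn) < 0.1f) && (mn > 0.6f);
}
```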
Hope it helps ...
[Edit1] orthogonal projection
So it's clear, here is what I mean by orthogonal projection:

This is the texture used (I can't find anything better suited and free on the web); I wanted to use a real satellite image, not some rendered one ...

This is my orthogonal projection App:
- the red, green, blue lines are the Earth coordinate system (`x,y,z` axes)
- the (red, green, blue)-white-ish lines are the satellite projection coordinate system (`x,y,z` axes)
The point is to convert Earth vertex coordinates (`vx,vy,vz`) to satellite coordinates (`x,y,z`). If `z >= 0`, then it is a valid vertex for the processed texture, so compute the texture coordinates directly from `x,y` without any perspective (orthogonally).
For example `tx=0.5*(+x+1);` if `x` was scaled to `<-1,+1>` and the usable texture range is `tx` in `<0,1>`. The same goes for the `y` axis: `ty=0.5*(-y+1);` if `y` was scaled to `<-1,+1>` and the usable texture range is `ty` in `<0,1>` (my camera has an inverted `y` coordinate system with respect to the texture matrix, hence the inverted sign on the `y` axis).
If `z < 0`, then you are processing a vertex out of the texture range, so ignore it ... (see the helper sketch below).
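Put together, the mapping above could be a small helper like this, assuming `x,y,z` are already in satellite space and scaled to `<-1,+1>`:

```cpp
// Orthogonal mapping from satellite-space coordinates to texture coordinates.
// Returns false when the vertex faces away from the camera (z < 0).
bool vertexToTexCoord(float x, float y, float z, float &tx, float &ty)
{
    if (z < 0.0f) return false;       // out of texture range -> ignore
    tx = 0.5f * (+x + 1.0f);          // x in <-1,+1> -> tx in <0,1>
    ty = 0.5f * (-y + 1.0f);          // y sign flipped (inverted camera y)
    return true;
}
```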
As you can see in the image, the outer boundaries of the texture are distorted, so you should use only the inside (for example 70% of the Earth image area). You can also apply some kind of texture coordinate correction dependent on the distance from the texture middle point. When you have this done, just merge all the satellite image projections into one image and that is all.
[Edit2] Well I played with it a little and found out this:
- reverse projection correction does not work for my texture at all; I think it is possible the image was post-processed ...
- middle-point-distance-based correction seems to be nice, but the scale coefficient used is odd; I have no clue why to multiply by 6 when it should be 4, I think ...
```cpp
tx=0.5*(+(asin(x)*6.0/M_PI)+1);
ty=0.5*(-(asin(y)*6.0/M_PI)+1);
```
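Wrapped into the same helper shape as the sketch in Edit1 (the `6.0` coefficient is just the empirically found value, not a derived one):

```cpp
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// asin-corrected variant of vertexToTexCoord() above.
bool vertexToTexCoordAsin(float x, float y, float z, float &tx, float &ty)
{
    if (z < 0.0f) return false;                           // back side -> ignore
    tx = 0.5f * (+(asinf(x) * 6.0f / (float)M_PI) + 1.0f);
    ty = 0.5f * (-(asinf(y) * 6.0f / (float)M_PI) + 1.0f);
    return true;
}
```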

- corrected nonlinear projection (by `asin`)
- corrected nonlinear projection edge zoom
- the distortions are much, much smaller than without the `asin` texture coordinate corrections