
We have 5 geostationary satellites, spaced around the equator (not equally spaced, but almost), taking photos of Earth every day. Each photo is - surprise! - an image of a sphere, taken from a long distance away.

I need to reassemble those photos into a single texture-mapped sphere, and I'm not sure how best to do this. Key problems:

  1. The photos are - obviously - increasingly distorted the further you go from the center, since they're looking at a sphere
  2. There are many hundreds of "sets" of 5 photos, taken at different times of day. Any solution needs to be programmatic - I can't just do this by hand :(
  3. Output platform is the iPad3: OpenGL ES 2, textures up to 4096x4096 - but not as powerful as a desktop GPU. I'm not great with shaders (although I've done a lot of pre-shader OpenGL)
  4. The photos themselves are high-res, and I'm not sure I can have all 5 textures loaded simultaneously. I've also got a very high-res texture loaded for the planet surface (underneath the satellite photos).

I've already got a single rectangular texture mapped onto a sphere (my sphere is a standard mesh wrapped into a sphere, with vertices distributed evenly across the surface). So far I've tried converting the 5 sphere photos into a single rectangular map, with no success, although someone pointed me at a "polar sin warp" which looks like it might work better.

I've also thought of doing something funky with making a cube-map out of the 5 photos, and being clever about deciding which of the photos to read for a given pixel, but I'm not entirely convinced.

Is there a better way? Something I've overlooked? Or has anyone got a concrete way of achieving the above?

Adam

1 Answer


I would build a single rectangular texture from the photos.

You will need two 2D textures/arrays: one for the r,g,b color sums (avg) and one for the sample count (cnt). Also, I am not convinced I would use OpenGL/GLSL for this; it seems to me that plain C/C++ is better suited.

I would do it like this (a full C sketch follows the list):

  1. blank the destination textures (avg[][]=0, cnt[][]=0)
  2. obtain the satellite position/direction and the capture time

    From the position and direction, create a transformation matrix that projects the Earth the same way as in the photo. Then, from the time, determine the Earth's rotation shift.

  3. loop through the entire Earth's surface

    just two nested loops: a - rotation, and b - distance from the equator

  4. get x,y,z from a,b using the transform matrix plus the rotation shift (on the a axis)

    You can also do it backwards, a,b,z = f(x,y); that is trickier, but faster and more accurate. You can also interpolate x,y,z between neighboring (pixels/areas) [a][b]

  5. add pixel

    if x,y,z is on the front side (z>0 or z<0, depending on the camera's Z direction) then

    avg[a][b]+=image[x][y]; cnt[a][b]++;
    
  6. end of nested loop from point #3.

  7. go to #2 with the next photo
  8. loop through the entire avg texture to restore the average color

    if (cnt[a][b]) avg[a][b]/=cnt[a][b];
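
For reference, here is a minimal C sketch of the whole pass described above (untested; the destination size, the packed-RGB photo layout, and the function names are my own placeholder assumptions, and avg uses float so the sums cannot overflow, per note #2 below):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define W 2048                      /* destination width  (a axis, rotation)     */
    #define H 1024                      /* destination height (b axis, from equator) */

    static float avg[H][W][3];          /* r,g,b running sums */
    static int   cnt[H][W];             /* samples per texel  */

    /* one photo: img is assumed to be tightly packed RGB, iw x ih pixels;
       M is the 3x3 world->satellite rotation from step #2 and rot the
       Earth rotation shift derived from the capture time */
    void accumulate_photo(const unsigned char *img, int iw, int ih,
                          const float M[3][3], float rot)
    {
        for (int b = 0; b < H; b++)     /* distance from equator */
        for (int a = 0; a < W; a++)     /* rotation              */
        {
            float lon = (float)(2.0 * M_PI) * a / W + rot;
            float lat = (float)M_PI * (0.5f - (float)b / H);
            /* step #4: point on the unit sphere from a,b */
            float p[3] = { cosf(lat) * cosf(lon),
                           cosf(lat) * sinf(lon),
                           sinf(lat) };
            /* transform into satellite space */
            float x = M[0][0]*p[0] + M[0][1]*p[1] + M[0][2]*p[2];
            float y = M[1][0]*p[0] + M[1][1]*p[1] + M[1][2]*p[2];
            float z = M[2][0]*p[0] + M[2][1]*p[1] + M[2][2]*p[2];
            if (z <= 0.0f) continue;    /* step #5: back side, skip */
            /* orthogonal projection into photo pixels */
            int ix = (int)(0.5f * ( x + 1.0f) * (iw - 1));
            int iy = (int)(0.5f * (-y + 1.0f) * (ih - 1));
            const unsigned char *px = img + 3 * (iy * iw + ix);
            avg[b][a][0] += px[0];
            avg[b][a][1] += px[1];
            avg[b][a][2] += px[2];
            cnt[b][a]++;
        }
    }

    /* step #8: restore the average color */
    void finalize(void)
    {
        for (int b = 0; b < H; b++)
        for (int a = 0; a < W; a++)
            if (cnt[b][a])
            {
                avg[b][a][0] /= cnt[b][a];
                avg[b][a][1] /= cnt[b][a];
                avg[b][a][2] /= cnt[b][a];
            }
    }

Call accumulate_photo once per photo (each with its own M and rot) and finalize once at the end; avg is then the rectangular texture to map onto the sphere.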
    

[Notes]

  1. you can test the copied pixel:

    Was it obtained during day or night? (use only the one you want, and do not mix both together!!!) You can also detect clouds (gray/white-ish colors, I think, as opposed to snow) and ignore them; see the sketch after this list.

  2. do not overflow the color sums

    you can use 3 separate textures r[][], g[][], b[][] instead of avg to avoid that

  3. you can ignore areas near the edges of the Earth disc to avoid distortions

  4. you can apply lighting corrections

    use the time and the a,b coordinates to normalize illumination
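
For note #1, a possible grayness test for clouds (a sketch only; the 180 and 30 thresholds are made-up values you would need to tune against the real photos):

    /* bright and nearly gray -> probably cloud; skip such pixels
       in step #5 before adding them to avg/cnt */
    int looks_like_cloud(unsigned char r, unsigned char g, unsigned char b)
    {
        int mx = r; if (g > mx) mx = g; if (b > mx) mx = b;
        int mn = r; if (g < mn) mn = g; if (b < mn) mn = b;
        return (mx > 180) && (mx - mn < 30);
    }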

Hope it helps ...

[Edit1] orthogonal projection

So it's clear, here is what I mean by orthogonal projection:

Satellite photo texture (EUMETSAT)

This is the texture used (I could not find anything better suited and free on the web, and I wanted to use a real satellite image, not some rendered one ...).

orthogonal projection

This is my orthogonal projection app.

  • the red, green, blue lines are the Earth coordinate system (x, y, z axes)
  • the whitish red, green, blue lines are the satellite projection coordinate system (x, y, z axes)

The point is to convert Earth vertex coordinates (vx,vy,vz) to satellite coordinates (x,y,z). If z >= 0 then the vertex is valid for the processed texture, so compute the texture coordinates directly from x,y without any perspective (orthogonally).

For example tx=0.5*(+x+1); ... if x was scaled to <-1,+1> and the usable texture range is tx <0,1>. The same goes for the y axis: ty=0.5*(-y+1); (my camera has an inverted y coordinate system relative to the texture matrix, hence the inverted sign on the y axis).

If z < 0 then you are processing a vertex outside the texture range, so ignore it ... As you can see in the image, the outer boundaries of the texture are distorted, so you should use only the inside (for example 70% of the Earth image area). You can also apply some kind of texture coordinate correction dependent on the distance from the texture's middle point. When you have this done, just merge all the satellite image projections into one image and that is all.
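
Put as code, the per-vertex test and mapping could look like this (a sketch; M is the same hypothetical world-to-satellite rotation as in the loop above, and the rim cutoff implements the "use only ~70% of the Earth image area" rule):

    /* texture coordinates for one vertex of the sphere mesh;
       returns 0 when this photo does not cover the vertex */
    int vertex_texcoord(const float v[3], const float M[3][3],
                        float *tx, float *ty)
    {
        float x = M[0][0]*v[0] + M[0][1]*v[1] + M[0][2]*v[2];
        float y = M[1][0]*v[0] + M[1][1]*v[1] + M[1][2]*v[2];
        float z = M[2][0]*v[0] + M[2][1]*v[1] + M[2][2]*v[2];
        if (z < 0.0f) return 0;            /* back side of the Earth       */
        if (x*x + y*y > 0.70f) return 0;   /* distorted rim: keep roughly
                                              70% of the disc area         */
        *tx = 0.5f * ( x + 1.0f);          /* x,y already in <-1,+1>       */
        *ty = 0.5f * (-y + 1.0f);          /* camera y inverted vs texture */
        return 1;
    }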

[Edit2] Well, I played with it a little and found the following:

  • the reverse projection correction does not work for my texture at all; I think it is possible the image was post-processed ...
  • the middle-point distance based correction seems nice, but the scale coefficient used is odd; I have no clue why to multiply by 6 when it should be 4, I think ...

    tx=0.5*(+(asin(x)*6.0/M_PI)+1); 
    ty=0.5*(-(asin(y)*6.0/M_PI)+1); 
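
Wired into the vertex_texcoord sketch from [Edit1], this correction simply replaces the two linear mapping lines (asinf is the float asin from <math.h>; the 6.0 coefficient is the empirical value discussed above):

    /* drop-in replacement for the linear tx,ty lines */
    *tx = 0.5f * (+(asinf(x) * 6.0f / (float)M_PI) + 1.0f);
    *ty = 0.5f * (-(asinf(y) * 6.0f / (float)M_PI) + 1.0f);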
    

corrected nonlinear projection

  • corrected nonlinear projection (by asin)

corrected nonlinear projection edge zoom

  • corrected nonlinear projection edge zoom
  • distortions are much, much smaller than without the asin texture coordinate correction
Spektre
  • i.e. a brute-force "sample it, de-project it, and interpolate/blur the results into a new rectangular texture" ? Seems like it should work fine, although I'd prefer to do something that could run online, since e.g. Apple/Imagination won't allow us to generate PVR textures at runtime :( – Adam Jan 08 '14 at 19:40
  • What about creating a bmp/tga/jpg/png texture as a file, converting it to a PVR texture with a command-line tool (something like texturetool) from your app, and then loading it from the file? Or is that also not allowed at runtime? BTW, why can't you use a non-PVR texture (it would be faster in this case)? Also, you could do all the transformation as rendering via GLSL, but from what you write you cannot change the input data at runtime, so your app is fixed to the same input images anyway, which seems silly (so you could build the texture on the desktop and copy the result to the iPad anyway ...) – Spektre Jan 09 '14 at 08:19
  • PVR command line tool requires a minimum of FOUR GIGABYTES of RAM, so ... that's not going to run on any device you can buy today (even assuming Imagination shipped a version for iOS - I don't believe they do). Also how could it be "faster" ? – Adam Jan 09 '14 at 16:10
  • Well, if you want to use PVR then you need to compress beforehand and then decompress the image during rendering. If you create the texture during rendering, then the compress/decompress step is redundant. On the other hand, if your gfx HW handles PVR natively, then you're right, it would not be. – Spektre Jan 09 '14 at 19:29
  • If you want just simple rendering without generating a new texture, then project all of your satellite images onto a blank sphere, but orthogonally, not in perspective!!! You can do it as a multi-pass render, one pass per photo, if you are worried about memory issues. And maybe not a sphere but an ellipsoid. If you share some images, I would try to do it when I have the time ... :) (I do not need the full resolution) – Spektre Jan 09 '14 at 19:33
  • All PVR chips handle PVR natively. All iOS devices use PVR chips. The question is tagged "iOS" :) – Adam Jan 10 '14 at 12:22
  • re: "project onto blank sphere" - sure, but roughly how would you do this? I can project geometry orthogonally --- but how do you project a TEXTURE orthogonally? – Adam Jan 10 '14 at 12:23
  • Easy: by projecting the texture coordinates orthogonally, of course scaled to the texture size. Draw the sphere and for every vertex compute the relevant texture x,y (if it is in range). Without GLSL it will be relatively slow, but I doubt you have too many triangles on the sphere, so do not worry about it. – Spektre Jan 10 '14 at 17:08
  • Well, I edited my answer so you can actually see what I mean by orthogonal projection ... – Spektre Jan 12 '14 at 12:23
  • Thanks! that helps a lot. It's many times easier than I was imagining - I was just overcomplicating it far too much :(. I'm a fool :). – Adam Jan 12 '14 at 16:24