4

I am trying to create a lomo fisheye effect on an image using OpenGL.

Should I use cube mapping and a fisheye projection? Is there any open source code I can refer to?

genpfault
0pcl
    [Fisheye Quake](http://strlen.com/gfxengine/fisheyequake/), a modification of GLQuake, should be of interest to you. Comes with source. – daxim Dec 17 '09 at 13:14
  • Thanks! I will take a look at that. I did a bit of research on OpenCV after I posted this question; is it possible to simulate a fisheye with OpenCV instead? I saw there's a camera calibration function available that helps correct radial distortion like fisheye. – 0pcl Dec 18 '09 at 09:13
  • I think OpenCV is just going to be more work than learning and doing it directly. – Daniel Yankowsky Dec 21 '09 at 20:06

2 Answers

5

You can draw a single quad with the image textured onto it, and use a fragment shader to warp the texture coordinates per pixel as you desire. You'll have to do all the math yourself, but it looks like the previous post here might be a good starting point.
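
The per-pixel warp might look like the following, sketched in Python rather than GLSL so the math is easy to follow; the equidistant-fisheye model and the 140-degree FOV are illustrative assumptions, not something the answer specifies.

```python
import math

def fisheye_warp(u, v, fov=math.radians(140.0)):
    """Given an output texture coordinate (u, v in [0, 1]), return the
    source coordinate to sample, producing a fisheye look.  The same
    math would go in a GLSL fragment shader."""
    # Center the coordinates so (0, 0) is the middle of the image.
    x, y = 2.0 * u - 1.0, 2.0 * v - 1.0
    r = math.hypot(x, y)
    if r == 0.0:
        return u, v
    # Treat the output radius as an angle off the view axis, then find
    # where that ray lands on a flat (pinhole) image of the same FOV.
    # In a real shader, corner pixels (r > 1) would need clamping.
    theta = r * fov / 2.0
    r_src = math.tan(theta) / math.tan(fov / 2.0)
    return ((x * r_src / r) + 1.0) / 2.0, ((y * r_src / r) + 1.0) / 2.0
```

Because r_src grows slower than r near the center, interior samples are pulled inward, magnifying the middle of the image in the classic fisheye way.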

Asher Dunn
1

Taking the question title beyond an effect on an image to producing "true" fisheye views, i.e. a usable field of view of 180+ degrees...

There are two slightly different methods for adapting an existing pipeline to a fisheye view (with "simple" OpenGL). Both require rendering the scene up to six times, once for each side of the "box" that will be projected onto a flat screen. Each side / surface must be square (or should be, depending on the method) and will likely be smaller than the original full viewport.

The number of sides required depends on how wide a fisheye field of view is requested. In a typical FPS, for a FOV of 130 degrees, three sides are enough; for a FOV up to 220 degrees, five sides.

Method 1 - cubemap texture (GL_ARB_texture_cube_map)

  1. init once: for a specific FOV, pre-calculate a translation table from 2D on-screen coordinates to cubemap texture 3D coordinates; a 16x16 grid for the whole screen should be enough
  2. set up the viewport and position the camera accordingly to render each box side, then do the usual rendering
  3. bind the sides to a GL_TEXTURE_CUBE_MAP_ARB texture
  4. iterate over the screen, emitting (rectangular) GL_QUAD_STRIPs using the translation table and the cubemap.
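
The table pre-calculated in step 1 can be sketched like this (Python used for clarity; the equidistant fisheye model and the "camera looks down -Z, Y up" convention are assumptions for illustration):

```python
import math

def fisheye_direction(u, v, fov_deg=180.0):
    """Map a screen coordinate (u, v in [0, 1]) to the 3D cubemap
    lookup direction for an equidistant fisheye of the given FOV."""
    x, y = 2.0 * u - 1.0, 2.0 * v - 1.0
    r = math.hypot(x, y)
    theta = r * math.radians(fov_deg) / 2.0   # angle off the view axis
    phi = math.atan2(y, x)                    # azimuth around the axis
    sin_t = math.sin(theta)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), -math.cos(theta))

def build_table(n=16, fov_deg=180.0):
    """Pre-compute the (n+1) x (n+1) grid of directions once at init."""
    return [[fisheye_direction(i / n, j / n, fov_deg)
             for i in range(n + 1)] for j in range(n + 1)]
```

Each grid entry is then fed to the cubemap as a 3D texture coordinate when emitting the quad strips in step 4, and the hardware interpolates between grid points.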

Method 2 - 2d or rectangular textures (GL_NV_texture_rectangle)

  1. init once: for a specific FOV, pre-calculate a "ray" table mapping texture + 2D texture coordinates to 2D screen coordinates
  2. as in Method 1, set up the viewport and position the camera accordingly to render each box side, then do the usual rendering
  3. bind the sides to GL_TEXTURE_RECTANGLE_NV or GL_TEXTURE_2D textures
  4. iterate over the textures, emitting (trapezoid) GL_QUAD_STRIPs on the screen using the "ray" table.
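
The "ray" table runs the mapping in the opposite direction of Method 1: each texel on a box side is projected to a fisheye screen position. The projection half of that, again sketched in Python under the same equidistant assumption:

```python
import math

def direction_to_screen(d, fov_deg=180.0):
    """Project a 3D view direction onto the fisheye screen, returning
    x, y in [-1, 1].  This is the inverse of the cubemap lookup used
    by Method 1."""
    dx, dy, dz = d
    theta = math.acos(max(-1.0, min(1.0, -dz)))  # angle off the -Z view axis
    r = theta / (math.radians(fov_deg) / 2.0)    # equidistant screen radius
    phi = math.atan2(dy, dx)
    return (r * math.cos(phi), r * math.sin(phi))

def front_texel_direction(s, t):
    """Direction through a texel (s, t in [0, 1]) on the front box side."""
    x, y, z = 2.0 * s - 1.0, 2.0 * t - 1.0, -1.0
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

Straight texel rows on a side map to curves on the screen, which is why the emitted quad strips come out trapezoid-shaped and need a reasonably fine subdivision.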

Method 1 is simpler and delivers better results.

Gotchas:

  • set cubemap texture wrapping to GL_CLAMP_TO_EDGE
  • in a typical FPS the player view has not only pitch and yaw, but also roll - calculate the camera orientation for each side via a proper rotation
  • if the render loop is combined with progress / physics / AI, the repeated scene re-rendering may confuse existing internals.

All of this, of course, depends on the specifics of a particular engine. I'm not sure how well this applies to the OpenGL 3.3+ core profile, but the idea should be the same.

It is possible to draw the world with a fisheye projection in one pass by doing the fisheye transformation in a vertex shader. But it requires the original geometry to be sufficiently (pre-)tessellated. Alternatively, it may be possible to employ a geometry shader and/or tessellation shaders to organize tessellation / transform feedback on the GPU. The latter should probably be built into the renderer from the ground up.
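
To illustrate why tessellation matters, here is the per-vertex transform sketched in Python (in a real engine it would be GLSL in the vertex shader; the equidistant model is again an assumption). Only the vertices are bent onto the fisheye curve; the rasterized edges between them stay straight lines, so long edges visibly cut across the curve unless the mesh is dense.

```python
import math

def fisheye_project(v, fov_deg=180.0):
    """Project one eye-space vertex (camera looking down -Z) to 2D
    fisheye screen coordinates in [-1, 1]."""
    vx, vy, vz = v
    d = math.sqrt(vx * vx + vy * vy + vz * vz)
    theta = math.acos(max(-1.0, min(1.0, -vz / d)))  # angle off the view axis
    r = theta / (math.radians(fov_deg) / 2.0)        # equidistant screen radius
    l = math.hypot(vx, vy)
    if l == 0.0:
        return (0.0, 0.0)
    return (r * vx / l, r * vy / l)
```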

For a well-isolated example using the Quake 1 engine, see Fisheye and Panorama OpenGL FPS and this diff specifically. Unfortunately, the vertex shader example is lost.

Arkadi Shishlov