
I'm in the process of creating a cover-flow view for Android, but I'm running into a bit of a roadblock when handling clicks on the view: I'm having trouble determining which square is beneath the click. To determine which square was clicked, I take the (x, y) data from the click event and convert it into OpenGL view space. The trouble comes when mapping that point onto a square: the translation, rotation, and comparisons can all be done in Java, but that seems like a huge waste, since all of these operations are already being performed when rendering the view.
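One small detail in that conversion trips a lot of people up: Android touch coordinates have their origin at the top-left, while OpenGL window coordinates have theirs at the bottom-left, so the y value must be flipped before any unprojection. A minimal sketch (the helper name is hypothetical; it assumes you track the view height yourself):

```java
// Hypothetical helper: convert an Android touch position (origin top-left)
// into OpenGL window coordinates (origin bottom-left) by flipping y.
public class TouchToWindow {
    static float[] toWindowCoords(float touchX, float touchY,
                                  int viewWidth, int viewHeight) {
        // x is unchanged; y is measured from the opposite edge.
        return new float[]{touchX, viewHeight - touchY};
    }
}
```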

So, my question is: how can I extract the 2D view-mapped object coordinates during the OpenGL render process?

I've gotten my brain in a bit of a twist trying to figure out what is actually in OpenGL's modelview and projection matrices, and I can't seem to find any specification of what the numbers in those matrices actually mean.

Thanks!

Vatsu1

2 Answers


You can use gluUnProject() and gluProject() to convert between object and window space and save yourself some math. Docs are here and here.
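On Android you would call `android.opengl.GLU.gluUnProject` directly with your modelview matrix, projection matrix, and viewport. As a rough sketch of the math it performs, assuming the caller already has the inverse of (projection × modelview) in OpenGL's column-major layout (on device you could compute it with `android.opengl.Matrix.invertM`):

```java
// Sketch of the unprojection math behind gluUnProject (hypothetical helper).
// invMvp: inverse of (projection * modelview), column-major, 16 floats.
// viewport: {x, y, width, height}, as passed to glViewport.
public class Unproject {
    static float[] unproject(float winX, float winY, float winZ,
                             float[] invMvp, int[] viewport) {
        // Window coordinates -> normalized device coordinates in [-1, 1].
        float[] ndc = {
            2f * (winX - viewport[0]) / viewport[2] - 1f,
            2f * (winY - viewport[1]) / viewport[3] - 1f,
            2f * winZ - 1f,
            1f
        };
        // Multiply by the inverse MVP (column-major 4x4 times column vector).
        float[] obj = new float[4];
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                obj[row] += invMvp[col * 4 + row] * ndc[col];
            }
        }
        // Perspective divide back to 3D object coordinates.
        for (int i = 0; i < 3; i++) {
            obj[i] /= obj[3];
        }
        return new float[]{obj[0], obj[1], obj[2]};
    }
}
```

Unprojecting the touch point twice, at winZ = 0 and winZ = 1, gives a ray through the scene that you can intersect with each square.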

Geobits
  • Does OpenGL ES provide `glu` ? Seems not according to http://stackoverflow.com/questions/7589563/can-i-use-glu-with-android-ndk – rotoglup Jul 02 '12 at 21:14
  • Well, either way, android does, at least for java. NDK, maybe not: http://developer.android.com/reference/android/opengl/GLU.html – Geobits Jul 02 '12 at 21:18

There is another technique that involves rendering one frame in which every primitive in the scene is drawn in a different solid color. You then read back the pixel color under the touch event to identify the object. There are tricks you can play with this technique, like shrinking the viewport so that it covers only a small area around the touch, and rendering to a texture so the user never sees the picking frame.
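The heart of this picking technique is a reversible mapping between object ids and colors; the rendering and readback themselves would use your normal draw calls plus `glReadPixels`. A minimal sketch of the mapping (the class and method names are hypothetical):

```java
// Sketch of the id <-> color mapping used in color-coded picking.
// Each pickable object gets a unique opaque RGBA color; after reading the
// pixel under the touch back with glReadPixels, decode it to the object id.
public class ColorPicker {
    // Pack an object id into the RGB channels (supports up to 2^24 ids).
    static int idToColor(int id) {
        return 0xFF000000 | (id & 0x00FFFFFF);
    }

    // Recover the object id from a pixel read out of the picking buffer.
    static int colorToId(int rgba) {
        return rgba & 0x00FFFFFF;
    }
}
```

For this to work, the picking pass must be rendered with lighting, blending, dithering, and anti-aliasing disabled, so the colors come back exactly as written.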

CaseyB