I'm currently experimenting with opencv on the iOS platform. The idea is to capture a video feed and then look for an image within that feed. I am correctly detecting the image and would like to use the homography between the known image and the image on screen to transform a CALayer which contains a frame for the image (so that the frame is drawn around the image).
I've calculated a homography between the two images using findHomography. I've verified that this homography is valid: using perspectiveTransform to project the known image's corners and draw a border into a cv::Mat gives me perfect alignment while scaling, translating, and rotating the camera in 3D. I would now like to use the same homography to transform a CALayer (Core Animation on iOS) to draw a HUD over the captured image, marking the located image in the camera feed.
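For reference, this is roughly what my detection and verification code looks like (queryPoints/trainPoints are placeholders for my matched feature coordinates):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// queryPoints: feature locations in the known (reference) image,
// trainPoints: the corresponding locations in the camera frame.
cv::Mat computeHomography(const std::vector<cv::Point2f>& queryPoints,
                          const std::vector<cv::Point2f>& trainPoints)
{
    // RANSAC discards outlier matches; 3.0 is the reprojection threshold in pixels.
    return cv::findHomography(queryPoints, trainPoints, cv::RANSAC, 3.0);
}

// Verification: push the reference image's corners through H and stroke the
// resulting quadrilateral into the camera frame. This aligns perfectly.
void drawBorder(cv::Mat& frame, const cv::Mat& H, const cv::Size& refSize)
{
    std::vector<cv::Point2f> corners = {
        {0.f, 0.f},
        {(float)refSize.width, 0.f},
        {(float)refSize.width, (float)refSize.height},
        {0.f, (float)refSize.height}
    };
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(corners, projected, H);
    for (size_t i = 0; i < 4; ++i)
        cv::line(frame, projected[i], projected[(i + 1) % 4],
                 cv::Scalar(0, 255, 0), 2);
}
```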
I've tried using Tommy's answer from Converting OpenCV's findHomography perspective matrix to iOS' CATransform3D, but this did not work: nothing was drawn on the screen. My conversion attempt is sketched below.
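This is roughly the conversion I'm applying, adapted from that answer (treat the exact field mapping as my assumption). Since Core Animation multiplies row vectors on the left ([x y z 1] * M), the 3x3 homography is transposed into the 4x4 CATransform3D, leaving the z row and column as identity:

```objc
#import <QuartzCore/QuartzCore.h>
#include <opencv2/core.hpp>

// H is the 3x3, CV_64F matrix returned by cv::findHomography.
static CATransform3D transformFromHomography(const cv::Mat& H)
{
    CATransform3D t = CATransform3DIdentity;
    t.m11 = H.at<double>(0, 0); t.m12 = H.at<double>(1, 0); t.m14 = H.at<double>(2, 0);
    t.m21 = H.at<double>(0, 1); t.m22 = H.at<double>(1, 1); t.m24 = H.at<double>(2, 1);
    t.m41 = H.at<double>(0, 2); t.m42 = H.at<double>(1, 2); t.m44 = H.at<double>(2, 2);
    return t;
}
```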
I'm using a custom CALayer subclass which simply draws a rectangle in its frame. I've tested this without a transform and it works. When I apply the generated CATransform3D, however, nothing is drawn to the screen.
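The subclass, reduced to its essentials (HUDLayer is just my name for it):

```objc
#import <QuartzCore/QuartzCore.h>

@interface HUDLayer : CALayer
@end

@implementation HUDLayer
- (void)drawInContext:(CGContextRef)ctx {
    // Stroke a rectangle around the layer's bounds.
    CGContextSetRGBStrokeColor(ctx, 0.0, 1.0, 0.0, 1.0);
    CGContextStrokeRectWithWidth(ctx, self.bounds, 4.0);
}
@end
```

and the transform is applied like this:

```objc
hudLayer.transform = transformFromHomography(H); // conversion sketched above
```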
I've tried to debug this by using Tom's answer from How to get rotation, translation, shear from a 3x3 Homography matrix in c# to calculate the homography's components, but the results look wrong: the rotation comes out in increments of pi/2 and the scale factors are enormous. All the while, perspectiveTransform uses the very same homography matrix perfectly.
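For completeness, this is the decomposition I used for debugging, adapted from that answer (it only looks at the upper-left 2x2 affine part plus the translation, and ignores the perspective terms h20/h21):

```cpp
#include <opencv2/core.hpp>
#include <cmath>

struct HomographyStats { double tx, ty, rotation, scaleX, scaleY, shear; };

// Decomposes the affine part of H as rotation * [scaleX shear; 0 scaleY].
static HomographyStats decompose(const cv::Mat& H) // 3x3, CV_64F
{
    double a = H.at<double>(0, 0), b = H.at<double>(0, 1);
    double c = H.at<double>(1, 0), d = H.at<double>(1, 1);

    HomographyStats s;
    s.tx = H.at<double>(0, 2);
    s.ty = H.at<double>(1, 2);
    s.rotation = std::atan2(c, a);                // radians
    s.scaleX   = std::sqrt(a * a + c * c);
    s.scaleY   = (a * d - b * c) / s.scaleX;      // det / scaleX
    s.shear    = (a * b + c * d) / s.scaleX;
    return s;
}
```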
Was there some change in OpenCV 3.0 that made the answers from the above two questions stop working? Any help would be much appreciated!