I am working with some data that another person recorded using the OpenNI Recorder module. Unfortunately, they accidentally left the mirror capability enabled during the recording, so I am having two problems: 1. un-mirroring the depth using MirrorCap, and 2. aligning the depth with the RGB using AlternativeViewPointCap. I tried accessing these capabilities from my depth node as follows:
xn::Context ni_context;
xn::Player player;
xn::DepthGenerator g_depth;
xn::ImageGenerator g_image;

// Initialize the context and open the .oni recording for playback
ni_context.Init();
ni_context.OpenFileRecording(oni_filename, player);

// Grab the depth and image nodes created by the player
ni_context.FindExistingNode(XN_NODE_TYPE_DEPTH, g_depth);
ni_context.FindExistingNode(XN_NODE_TYPE_IMAGE, g_image);

// Try to disable mirroring and register the depth map to the RGB view
g_depth.GetMirrorCap().SetMirror(false);
g_depth.GetAlternativeViewPointCap().SetViewPoint(g_image);
However, this did not work. Even after I call SetMirror(false), IsMirrored() on g_depth still returns true, and the AlternativeViewPointCap is not changing the depth map I receive from the generator.
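In case it helps with diagnosis, here is a sketch of the checks I would add (continuing from the snippet above; the printed messages are just illustrative) to see whether the playback node even reports these capabilities and what the calls actually return:

#include <cstdio>

// Sketch: confirm capability support and inspect the status codes
// returned by the calls above (continues from the previous snippet).
if (!g_depth.IsCapabilitySupported(XN_CAPABILITY_MIRROR))
    printf("depth node does not report the Mirror capability\n");
if (!g_depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT))
    printf("depth node does not report the AlternativeViewPoint capability\n");

XnStatus rc = g_depth.GetMirrorCap().SetMirror(false);
if (rc != XN_STATUS_OK)
    printf("SetMirror failed: %s\n", xnGetStatusString(rc));

rc = g_depth.GetAlternativeViewPointCap().SetViewPoint(g_image);
if (rc != XN_STATUS_OK)
    printf("SetViewPoint failed: %s\n", xnGetStatusString(rc));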
I also tried doing it through a mock node:
xn::MockDepthGenerator m_depth;
m_depth.CreateBasedOn(g_depth);

// Try to disable mirroring and register the mock node to the RGB view instead
m_depth.GetMirrorCap().SetMirror(false);
m_depth.GetAlternativeViewPointCap().SetViewPoint(g_image);

// Copy the current frame from the real generator into the mock node
xn::DepthMetaData temp;
g_depth.GetMetaData(temp);
m_depth.SetData(temp);
This also does not affect the depth map I get from m_depth. I'd appreciate any and all suggestions for how to make my color and depth information align, NO MATTER HOW HACKY. This data is difficult to record and I need to use it one way or another.
My current workaround is to create the mock depth node and flip all of the pixels with my own routine before handing the frame to SetData. I then use OpenCV to build a perspective transform from the RGB image to the depth image by having a user click four corresponding points, and apply that transform to the RGB frame so the values line up. It's not perfect, but it works. However, for the sake of other people who might need to use the data, I want to make a more proper fix.
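For reference, the workaround looks roughly like this. It is a sketch rather than my exact code: it assumes the nodes from the earlier snippets (g_depth, g_image, m_depth), the clicked point coordinates are placeholders, and the cv::Mat wrapper assumes the image node outputs RGB24.

#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>

xn::DepthMetaData depthMD;
g_depth.GetMetaData(depthMD);
depthMD.MakeDataWritable();                       // deep-copy the frame so it can be edited
XnDepthPixel* pDepth = depthMD.WritableData();
const XnUInt32 w = depthMD.XRes(), h = depthMD.YRes();

// 1. Un-mirror: reverse every row of the depth map in place.
for (XnUInt32 y = 0; y < h; ++y)
{
    XnDepthPixel* row = pDepth + y * w;
    for (XnUInt32 x = 0; x < w / 2; ++x)
        std::swap(row[x], row[w - 1 - x]);
}
m_depth.SetData(depthMD);                         // feed the flipped frame to the mock node

// 2. Align: warp the RGB frame onto the depth frame using a homography
//    computed from four user-clicked correspondences (placeholder values
//    below; in practice they come from a mouse callback).
cv::Point2f rgbPts[4]   = { /* points clicked in the RGB image   */ };
cv::Point2f depthPts[4] = { /* points clicked in the depth image */ };
cv::Mat H = cv::getPerspectiveTransform(rgbPts, depthPts);

xn::ImageMetaData imageMD;
g_image.GetMetaData(imageMD);
cv::Mat rgb(imageMD.YRes(), imageMD.XRes(), CV_8UC3,
            (void*)imageMD.Data());               // wraps the image buffer (RGB order, not BGR)
cv::Mat rgbAligned;
cv::warpPerspective(rgb, rgbAligned, H, cv::Size(w, h));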