I am trying to achieve something like this using Vision and ARKit. My idea is to get face landmark points from Vision and place a node using those points. I am using this demo as a reference. So far, I have been able to find the landmark points of the face using Vision. Now I want to use those points in ARKit to add nodes to the scene, but I am unable to get the depth, which is essential for the node's position.
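For context, this is roughly how I get the face observation from Vision (a minimal sketch; the request/handler setup here is illustrative, and running it on self.sceneView.session.currentFrame.capturedImage is just one way to feed it):

#import <Vision/Vision.h>

// Detect face landmarks on the current ARKit camera frame.
// (Image orientation handling is omitted here for brevity.)
VNDetectFaceLandmarksRequest *request = [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:^(VNRequest *req, NSError *err) {
    for (VNFaceObservation *observation in req.results) {
        // boundingBox is in normalized Vision coordinates (0–1, origin bottom-left);
        // it still has to be converted to view coordinates before any ARKit hit test.
        CGRect normalizedFaceRect = observation.boundingBox;
        // observation.landmarks.allPoints holds the 2D landmark points.
    }
}];

VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:self.sceneView.session.currentFrame.capturedImage options:@{}];
NSError *error = nil;
[handler performRequests:@[request] error:&error];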
After searching SO, I found this post on converting a CGPoint to an SCNVector3, but I am running into an issue there because I don't have any reference plane to hit test against to get the depth.
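What I am effectively after is something like the sketch below: take the 2D point from Vision and turn it into a world position once some depth is known (assumedWorldPoint is a purely hypothetical reference here, and faceRectCenter is the screen-space face center from my code further down):

// unprojectPoint: expects z in the renderer's normalized depth range
// (0 = near clipping plane, 1 = far clipping plane), not a distance in metres.
SCNVector3 assumedWorldPoint = SCNVector3Make(0, 0, -0.5); // hypothetical reference point
SCNVector3 projected = [self.sceneView projectPoint:assumedWorldPoint];

// Reuse the projected z as the depth for the Vision-detected screen point.
SCNVector3 worldPosition = [self.sceneView unprojectPoint:
                               SCNVector3Make(faceRectCenter.x, faceRectCenter.y, projected.z)];

The problem is that I have no good world-space reference to project in the first place.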
So, how can I get an accurate depth from those CGPoints without using hitTest, or is there any other way I can achieve the result shown in the video?
Here is the code I have implemented:
CGPoint faceRectCenter = (CGPoint){
    CGRectGetMidX(faceRect), CGRectGetMidY(faceRect)
}; // faceRect is the detected face bounding box

__block NSMutableArray<ARHitTestResult *> *testResults = [NSMutableArray new];

// Hit test against ARKit's feature points at the center of the detected face.
void (^hitTest)(void) = ^{
    NSArray<ARHitTestResult *> *hitTestResults = [self.sceneView hitTest:faceRectCenter types:ARHitTestResultTypeFeaturePoint];
    if (hitTestResults.count > 0) {
        // Keep the first result that is farther than 10 cm from the camera.
        ARHitTestResult *firstResult = nil;
        for (ARHitTestResult *result in hitTestResults) {
            if (result.distance > 0.10) {
                firstResult = result;
                [testResults addObject:firstResult];
                break;
            }
        }
    }
};

// Run the hit test a few times and average the collected results.
for (int i = 0; i < 3; i++) {
    hitTest();
}

if (testResults.count > 0) {
    NSLog(@"%@", testResults);
    SCNVector3 postion = averagePostion([testResults copy]);
    NSLog(@"<%.1f,%.1f,%.1f>", postion.x, postion.y, postion.z);

    // ARTextNode is my custom node; project its position to read back the screen-space depth.
    __block SCNNode *textNode = [ARTextNode nodeWithText:name Position:postion];
    SCNVector3 plane = [self.sceneView projectPoint:textNode.position];
    float projectedDepth = plane.z;
    NSLog(@"projectedDepth: %f", projectedDepth);

    dispatch_async(dispatch_get_main_queue(), ^{
        [self.sceneView.scene.rootNode addChildNode:textNode];
        [textNode show];
    });
} else {
    // NSLog(@"HitTest invalid");
}
}
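For completeness, averagePostion is my own helper that averages the hit-test results into a single SCNVector3; the per-result position comes from each result's worldTransform, roughly like this:

// Translation lives in the last column of the hit result's world transform.
static SCNVector3 positionFromHitResult(ARHitTestResult *result) {
    simd_float4 translation = result.worldTransform.columns[3];
    return SCNVector3Make(translation.x, translation.y, translation.z);
}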
Any help would be great!