I am building an ARKit application for iPhone. I need to detect a specific perfume bottle and display content depending on what is detected. I used the demo app from developer.apple.com to scan the real-world object and export an .arobject file, which I can use in my assets.
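For context, my detection setup is the standard one, roughly like the simplified sketch below ("PerfumeBottles" is a placeholder for my AR resource group name in Assets.xcassets):

```swift
import ARKit
import SceneKit

// Load the scanned .arobject references from the asset catalog and
// start a world-tracking session that looks for them.
func startDetection(in sceneView: ARSCNView) {
    guard let bottles = ARReferenceObject.referenceObjects(
        inGroupNamed: "PerfumeBottles", bundle: nil) else {
        fatalError("Missing AR resource group")
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = bottles
    sceneView.session.run(configuration)
}

// ARSCNViewDelegate callback: fires once ARKit recognizes the bottle.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let objectAnchor = anchor as? ARObjectAnchor else { return }
    print("Detected \(objectAnchor.referenceObject.name ?? "object")")
    // Attach the bottle-specific content to `node` here.
}
```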
This works, but because the bottle is made of glass, detection is very poor. It only detects in the location where the scan was made, and even then it takes anywhere from 2 to 30 seconds, or it doesn't detect at all. Merging multiple scans doesn't improve the situation; if anything it makes things worse, and the merged result can end up with a strange orientation.
What can I do to solve this?
If nothing can be done about that, would CoreML help me? I can take a lot of photos of the bottle and train a classification model, then check each camera frame for a match against that model. Does such an approach have any chance of working?
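What I have in mind is roughly the sketch below, assuming a Create ML image classifier compiled into the app; `BottleClassifier` is a hypothetical model name, and in practice I would throttle this rather than classify literally every frame:

```swift
import ARKit
import CoreML
import Vision

// Classifies ARKit camera frames against a trained image classifier.
final class FrameClassifier {
    // `BottleClassifier` is the hypothetical Xcode-generated model class.
    private lazy var request: VNCoreMLRequest = {
        let model = try! VNCoreMLModel(
            for: BottleClassifier(configuration: MLModelConfiguration()).model)
        return VNCoreMLRequest(model: model) { request, _ in
            guard let best = (request.results as? [VNClassificationObservation])?.first,
                  best.confidence > 0.9 else { return }
            print("Matched \(best.identifier)")
        }
    }()

    // Call from session(_:didUpdate:) with the current ARFrame,
    // ideally on a background queue and not for every single frame.
    func classify(_ frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right) // portrait
        try? handler.perform([request])
    }
}
```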