
My goal is to overlay material/texture onto a physical object (an architectural model) for which I would have an identical 3D model. The model would be static (on a table, if that helps), but I obviously want to look at the object from any side. The footprint of my physical models would tend to be no smaller than 15x15 cm and could be as large as 2-3 m^2, but I would be willing to change the size of the model to work within ARCore's capability.

I know ARCore is mainly designed to anchor digital objects to flat horizontal planes. My main question is: in its current state, is it capable of accomplishing my end goal? If I have this right, it would record physical point-cloud data and attempt to match it to the point-cloud data of my digital model, then overlap the two on the phone screen?
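For concreteness, the kind of matching I'm imagining resembles rigid point-set registration. Here is a minimal NumPy sketch of the alignment step (this is not ARCore code, and it assumes known point correspondences, which a real pipeline such as ICP would have to estimate first):

```python
# Toy sketch: rigidly align a "scanned" point cloud to a reference model cloud
# with the Kabsch algorithm, given known one-to-one correspondences.
import numpy as np

def kabsch_align(source, target):
    """Return rotation R and translation t minimizing ||R @ p + t - q|| over pairs."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered clouds
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Example: recover a known rotation and translation
rng = np.random.default_rng(0)
model = rng.random((50, 3))                      # reference (digital model) points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
scanned = model @ R_true.T + t_true              # simulated "physical" scan
R, t = kabsch_align(model, scanned)
print(np.allclose(R, R_true))  # True
```

In practice the hard part is the correspondence estimation and the robustness to partial, noisy scans, not this closed-form alignment.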

If that really isn't what ARCore is for, is there an alternative that I should be focusing on? In my head this sounded fairly straightforward, but I'm sure I'll get way out of my depth if I go about it an inefficient way. Speaking of depth, I would prefer not to use a depth sensor, since my target devices are phones.

Andy Jazz
SZwinsor

2 Answers


I very much hope that this will be possible in the future; after all, an AR toolkit without computer vision is not that helpful.

Unfortunately, according to the ARCore employee Ian, this is currently not directly supported, but you could try to access the pixels via glReadPixels and then process those image bytes with OpenCV.

Quote from Ian:

I can't speak to future plans, but I agree that it's a desirable capability. Unfortunately, my understanding is that current Android platform limitations prevent providing a single buffer that can be used as both a GPU texture and CPU-accessible image, so care must be taken in providing that capability.
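As a toy illustration of the CPU side of that workaround: once the frame's pixels have been copied out (e.g. via glReadPixels), ordinary image-processing code can run on them. The sketch below is plain NumPy, not ARCore or OpenCV; it simply locates a small grayscale patch inside a frame by brute-force sum-of-squared-differences, standing in for whatever recognition routine you would actually run:

```python
# Toy illustration: find where a small patch best matches inside a grayscale
# frame, using brute-force sum-of-squared-differences (SSD).
import numpy as np

def find_patch(frame, patch):
    """Return (row, col) of the best-matching patch position."""
    fh, fw = frame.shape
    ph, pw = patch.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(fh - ph + 1):
        for c in range(fw - pw + 1):
            ssd = np.sum((frame[r:r + ph, c:c + pw] - patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Example: cut a patch out of a synthetic frame and locate it again
rng = np.random.default_rng(1)
frame = rng.random((40, 40))
patch = frame[12:20, 25:33].copy()
print(find_patch(frame, patch))  # (12, 25)
```

A real pipeline would of course use something faster and more robust (e.g. OpenCV feature descriptors), but the data flow is the same: CPU-accessible pixel bytes in, detection result out.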

PhilLab
  • Yeah, I don't expect it to be supported until we can have basic CNNs running on mobile. However, as you mentioned OpenCV: you could stream this to a laptop with a beefy GPU locally and get results that way with some fiddling. Although I recommend TensorFlow, or perhaps CNTK - apparently 3x more efficient. – Jamie Nicholl-Shelley Sep 09 '18 at 08:27

Updated: 11th May, 2023.

Scene Semantics API

At the moment there's still no customizable 3D object recognition API in Google ARCore. However, the brand-new Scene Semantics API, part of ARCore 1.37, lets you automatically recognize eleven types of outdoor scene components:

  • Main components: sky, building, tree, road, vehicle

  • Major components: sidewalk, terrain, structure, water

  • Minor components: object, person

Scene Semantics runs an ML model on the camera image feed and provides a semantic image with each pixel corresponding to one of 11 labels of outdoor concepts.

In addition to the above, you can use the ML Kit framework and the Augmented Images API for various tasks. And, according to Google's documentation, you can use ARCore's camera feed as input for ML models.

Andy Jazz
  • Feature request : 3D Object Detection, issue no:418 - https://github.com/google-ar/arcore-android-sdk/issues/418 – LEGEND MORTAL Oct 31 '20 at 11:08
  • Sorry @LEGENDMORTAL, do you mean a workaround for that? I know that workarounds always exist, but the question is `Is ARCore object recognition possible?`. – Andy Jazz Oct 31 '20 at 11:36
  • 1
    I think its possible using Vuforia and ARCore, Ref: https://library.vuforia.com/content/vuforia-library/en/articles/Solution/arcore-with-vuforia.html and https://sigma.software/about/media/ar-experimenting-vuforia-object-recognition-android – LEGEND MORTAL Oct 31 '20 at 11:58