
The goal of the project is to create a drawing app. I want it so that when I touch the screen and move my finger, the paint follows the finger and leaves a cyan trail. I did create it, BUT there is one problem: the paint DEPTH is always randomly placed.

Here is the code; you just need to connect the sceneView with the storyboard: https://github.com/javaplanet17/test/blob/master/drawingar

My question is: how do I make the program so that the depth is always consistent? By consistent I mean there is always the same distance between the paint and the camera.

If you run the code above, you will see that I have printed out all the SCNMatrix4 values, but none of them is the DEPTH.

I have tried changing hitTransform.m43, but it only messes up the x and y.

adsad

1 Answer


If you want to get a point some consistent distance in front of the camera, you don’t want a hit test. A hit test finds the real world surface in front of the camera — unless your camera is pointed at a wall that’s perfectly parallel to the device screen, you’re always going to get a range of different distances.

If you want a point some distance in front of the camera, you need to get the camera’s position/orientation and apply a translation (your preferred distance) to that. Then to place SceneKit content there, use the resulting matrix to set the transform of a SceneKit node.

The easiest way to do this is to stick to SIMD vector/matrix types throughout rather than converting between those and SCN types. SceneKit adds a bunch of new accessors in iOS 11 so you can use SIMD types directly.

There’s at least a couple of ways to go about this, depending on what result you want.

Option 1

// set up z translation for 20 cm in front of whatever
// last column of a 4x4 transform matrix is translation vector
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2

// get camera transform the ARKit way
// (the session's currentFrame is optional, so unwrap it first)
guard let cameraTransform = view.session.currentFrame?.camera.transform else { return }
// if we wanted, we could go the SceneKit way instead; result is the same
// let cameraTransform = view.pointOfView!.simdTransform

// set node transform by multiplying matrices
node.simdTransform = cameraTransform * translation

This option, using a whole transform matrix, not only puts the node a consistent distance in front of your camera, it also orients it to point the same direction as your camera.

Option 2

// distance vector for 20 cm in front of whatever
let translation = float3(x: 0, y: 0, z: -0.2)

// treat distance vector as in camera space, convert to world space
// (pointOfView is optional, so unwrap it first)
guard let pointOfView = view.pointOfView else { return }
let worldTranslation = pointOfView.simdConvertPosition(translation, to: nil)

// set node position (not whole transform)
node.simdPosition = worldTranslation

This option sets only the position of the node, leaving its orientation unchanged. For example, if you place a bunch of cubes this way while moving the camera, they’ll all be lined up facing the same direction, whereas with Option 1 each would be turned to face whatever direction the camera was pointing when it was placed.

Going beyond

Both of the options above are based only on the 3D transform of the camera — they don’t take the position of a 2D touch on the screen into account.

If you want to do that, too, you’ve got more work cut out for you — essentially what you’re doing is hit testing touches not against the world, but against a virtual plane that’s always parallel to the camera and a certain distance away. That plane is a cross section of the camera projection frustum, so its size depends on what fixed distance from the camera you place it at. A point on the screen projects to a point on that virtual plane, with its position on the plane scaling proportional to the distance from the camera (like in the below sketch):

*(image: awesome napkin sketch of screen plane projection)*

So, to map touches onto that virtual plane, there are a couple of approaches to consider. (Not giving code for these because it’s not code I can write without testing, and I’m in an Xcode-free environment right now.)

  1. Make an invisible SCNPlane that’s a child of the view’s pointOfView node, parallel to the local xy-plane and some fixed z distance in front. Use SceneKit hitTest (not ARKit hit test!) to map touches to that plane, and use the worldCoordinates of the hit test result to position the SceneKit nodes you drop into your scene.

  2. Use Option 1 or Option 2 above to find a point some fixed distance in front of the camera (or a whole translation matrix oriented to match the camera, translated some distance in front). Use SceneKit’s projectPoint method to find the normalized depth value Z for that point, then call unprojectPoint with your 2D touch location and that same Z value to get the 3D position of the touch location with your camera distance. (For extra code/pointers, see my similar technique in this answer.)
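To make approach 1 concrete, here’s a rough, untested sketch (the answer deliberately gives none). Names like `sceneView` and `drawPlane` are my own assumptions, not from the question’s code, and the `colorBufferWriteMask` trick is one way to keep an invisible plane hit-testable on iOS 11:

```swift
import ARKit
import SceneKit

// Approach 1 sketch: an invisible plane riding 20 cm in front of the camera.
func setUpDrawPlane(in sceneView: ARSCNView) {
    guard let pointOfView = sceneView.pointOfView else { return }
    // Oversized so it covers the whole screen at its distance
    let planeGeometry = SCNPlane(width: 10, height: 10)
    // Draw nothing to the color buffer, but keep the geometry hit-testable
    planeGeometry.firstMaterial?.colorBufferWriteMask = []
    let drawPlane = SCNNode(geometry: planeGeometry)
    drawPlane.name = "drawPlane"
    drawPlane.position.z = -0.2   // 20 cm in front, in camera space
    pointOfView.addChildNode(drawPlane)
}

// SceneKit hit test (not ARKit hit test!) against that plane.
func worldPoint(for touchLocation: CGPoint, in sceneView: ARSCNView) -> SCNVector3? {
    let hits = sceneView.hitTest(touchLocation, options: nil)
    guard let hit = hits.first(where: { $0.node.name == "drawPlane" }) else { return nil }
    return hit.worldCoordinates
}
```

Because the plane is a child of `pointOfView`, it stays parallel to the screen and at a fixed distance no matter how the camera moves, which is exactly the virtual-plane cross section described above.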
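Approach 2 could look something like the following untested sketch, which reuses Option 2’s math to find the depth and then unprojects the touch at that depth (again, `sceneView` is an assumed name):

```swift
import ARKit
import SceneKit

// Approach 2 sketch: project a known-depth point, then unproject the touch.
func worldPoint(for touchLocation: CGPoint, in sceneView: ARSCNView) -> SCNVector3? {
    guard let pointOfView = sceneView.pointOfView else { return nil }
    // A point 20 cm in front of the camera, converted to world space (as in Option 2)
    let cameraSpacePoint = float3(0, 0, -0.2)
    let world = pointOfView.simdConvertPosition(cameraSpacePoint, to: nil)
    // projectPoint gives us that point's normalized screen-space depth in .z
    let projected = sceneView.projectPoint(SCNVector3(world.x, world.y, world.z))
    // unproject the 2D touch location using that same depth
    let screenPoint = SCNVector3(Float(touchLocation.x), Float(touchLocation.y), projected.z)
    return sceneView.unprojectPoint(screenPoint)
}
```

The key idea is that `unprojectPoint` needs a normalized z between 0 and 1, and borrowing it from a point already at the desired camera distance guarantees the result lands on that same virtual plane.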

rickster
  • Sorry for the late reply. I tried the invisible SCNPlane option and tested it by creating a ball wherever I touch the screen, but the longer I touch the screen, the balls quickly get closer and closer to the screen. Here is the new code: https://github.com/javaplanet17/test/blob/master/drawingar2 – adsad Nov 02 '17 at 23:45