
Recently I have been trying to implement dragging and scaling on a picture that I place in a FrameLayout. What I want to achieve is simple: to be able to drag the picture around and zoom it. I went to the Android Developer website and followed the guide there.

Following the code examples on that website, I wrote MyCustomView:

public class MyCustomView extends ImageView {
private static final int INVALID_POINTER_ID = 0xDEADBEEF;
private ScaleGestureDetector mScaleDetector;
private float mScaleFactor = 1.f;
private float mLastTouchX, mLastTouchY;
private int mActivePointerId = INVALID_POINTER_ID;
private LayoutParams mLayoutParams;
private int mPosX, mPosY;

public MyCustomView(Context context) {
    super(context);

    mScaleDetector = new ScaleGestureDetector(context, new CustomScaleListener());
    mLayoutParams = (LayoutParams) super.getLayoutParams();
    if (mLayoutParams != null) {
        mPosX = mLayoutParams.leftMargin;
        mPosY = mLayoutParams.topMargin;
    } else {
        mLayoutParams = new LayoutParams(300, 300);
        mLayoutParams.leftMargin = 0;
        mLayoutParams.topMargin = 0;
    }
}

@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    canvas.save();
    canvas.scale(mScaleFactor, mScaleFactor);

    canvas.restore();
}

@Override
public boolean onTouchEvent(MotionEvent ev) {
    // Let the ScaleGestureDetector inspect all events
    mScaleDetector.onTouchEvent(ev);

    final int action = MotionEventCompat.getActionMasked(ev);

    switch (action) {
        case MotionEvent.ACTION_DOWN: {
            final int pointerIndex = MotionEventCompat.getActionIndex(ev);

            //final float x = MotionEventCompat.getX(ev, pointerIndex);
            //final float y = MotionEventCompat.getY(ev, pointerIndex);
            final float x = ev.getRawX();
            final float y = ev.getRawY();

            // Remember where we started (for dragging)
            mLastTouchX = x;
            mLastTouchY = y;
            // Save the ID of this pointer (for dragging)
            mActivePointerId = MotionEventCompat.getPointerId(ev, 0);
            break;
        }

        case MotionEvent.ACTION_MOVE: {
            // Find the index of the active pointer and fetch its position
            final int pointerIndex = MotionEventCompat.findPointerIndex(ev, mActivePointerId);

            //final float x = MotionEventCompat.getX(ev, pointerIndex);
            //final float y = MotionEventCompat.getY(ev, pointerIndex);
            final float x = ev.getRawX();
            final float y = ev.getRawY();

            final float dx = x - mLastTouchX;
            final float dy = y - mLastTouchY;

            //TODO: Update the location of this view
            mPosX += dx;
            mPosY += dy;

            mLayoutParams.leftMargin += dx;
            mLayoutParams.topMargin += dy;
            super.setLayoutParams(mLayoutParams);

            invalidate();


            mLastTouchX = x;
            mLastTouchY = y;
            break;
        }

        case MotionEvent.ACTION_UP: {
            mActivePointerId = INVALID_POINTER_ID;
            break;
        }

        case MotionEvent.ACTION_CANCEL: {
            mActivePointerId = INVALID_POINTER_ID;
            break;
        }

        case MotionEvent.ACTION_POINTER_UP: {
            final int pointerIndex = MotionEventCompat.getActionIndex(ev);
            final int pointerID = MotionEventCompat.getPointerId(ev, pointerIndex);

            if (pointerID == mActivePointerId) {
                // This was our active pointer going up. Choose a new active pointer and
                // adjust accordingly
                final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
                //mLastTouchX = MotionEventCompat.getX(ev, newPointerIndex);
                //mLastTouchY = MotionEventCompat.getY(ev, newPointerIndex);
                mLastTouchX = ev.getRawX();
                mLastTouchY = ev.getRawY();
                mActivePointerId = MotionEventCompat.getPointerId(ev, newPointerIndex);
            }

            break;
        }
    }
    return true;
}




private class CustomScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        mScaleFactor *= detector.getScaleFactor();

        mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));

        invalidate();

        return true;
    }
}
}

In MainActivity I simply instantiate a MyCustomView object and attach it to the root ViewGroup, which is a FrameLayout. The XML layout file contains nothing but that FrameLayout.

public class MainActivity extends AppCompatActivity {
    private ViewGroup layoutRoot;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        layoutRoot = (ViewGroup) findViewById(R.id.view_root);

        final MyCustomView ivAndroid = new MyCustomView(this);
        ivAndroid.setImageResource(R.mipmap.ic_launcher);
        ivAndroid.setLayoutParams(new FrameLayout.LayoutParams(300, 300));
        layoutRoot.addView(ivAndroid);
    }
}

And here comes the problem that troubles me: The Android Developer website uses this to obtain the coordinates of the finger that touches the picture:

final float x = MotionEventCompat.getX(ev, pointerIndex);
final float y = MotionEventCompat.getY(ev, pointerIndex);

But it works horribly! The picture moves, but it does not follow my finger exactly; it always moves LESS than my finger does, and, most importantly, it flashes.

That is why, in MyCustomView, I commented out those lines and used this instead:

final float x = ev.getRawX();
final float y = ev.getRawY();

While this time the picture moves smoothly in accordance with my finger, this change introduces a new problem. On the Android Developer website for dragging and scaling, there is a design principle that says:

In a drag (or scroll) operation, the app has to keep track of the original pointer (finger), even if additional fingers get placed on the screen. For example, imagine that while dragging the image around, the user places a second finger on the touch screen and lifts the first finger. If your app is just tracking individual pointers, it will regard the second pointer as the default and move the image to that location.

After I started using ev.getRawX() and ev.getRawY(), adding a second finger to the screen gives me exactly the problem stated above. But MotionEventCompat.getX(ev, pointerIndex) and MotionEventCompat.getY(ev, pointerIndex) do not.

Can somebody explain to me why this happens? I know that MotionEventCompat.getX(ev, pointerIndex) returns the coordinate after some sort of adjustment, and that ev.getRawX() returns the absolute coordinate. But I don't understand how exactly the adjustment works (is there a formula or graphical explanation for it?). I also want to know why using MotionEventCompat.getX(...) would prevent the picture from jumping to the second finger on screen (after the first finger has been lifted).

Last but not least, the scaling code simply doesn't work AT ALL. If someone can teach me about that as well, it will be greatly appreciated!

– Thomas W Chen

1 Answer


This question is long, so I will break it into smaller parts. Also, English is not my native language, so I had some difficulty writing the answer. Comment if a part is not clear.

Can somebody explain to me why this happens?

ev.getRawX() and ev.getRawY() both give you the absolute pixel position of the event on the screen. For the sake of backwards compatibility (from the days when most screens could only track one "region" at a time), they always treat the finger as the first (and only) finger interacting with the device.

Later came improvements that allow tracking individual pointer IDs; the MotionEventCompat.getX(ev, pointerIndex) and MotionEventCompat.getY(ev, pointerIndex) functions allow for finer control when creating onTouch() listeners.
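
As a rough illustration (my sketch, not part of the original answer, assumed to run inside a View subclass's onTouchEvent(ev)): the per-pointer getX()/getY() values are relative to the view's own top-left corner, while getRawX()/getRawY() are screen-absolute and, without an index argument, always describe the first pointer.

// Sketch only: relating the two coordinate spaces for the first pointer.
int pointerIndex = ev.getActionIndex();
float localX = ev.getX(pointerIndex);   // relative to this view's left edge, per pointer
float localY = ev.getY(pointerIndex);   // relative to this view's top edge, per pointer

int[] viewOnScreen = new int[2];
getLocationOnScreen(viewOnScreen);      // where this view currently sits on the screen

float approxRawX = localX + viewOnScreen[0];  // roughly ev.getRawX() for pointer index 0
float approxRawY = localY + viewOnScreen[1];  // roughly ev.getRawY() for pointer index 0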

Is there a formula or graphical explanation for it?

Basically, you need to take the device's screen density into consideration, for example:

// Conversion factor from density-independent pixels (dp) to physical pixels
float SCREEN_DENSITY = getResources().getDisplayMetrics().density;

// Resizes and repositions a child frame, converting dp values to pixels
// (adding 0.5 before the int cast rounds to the nearest whole pixel)
protected void updateFrame(FrameLayout frameLayout, int h, int w, int x, int y) {
    FrameLayout.LayoutParams params = new FrameLayout.LayoutParams(
            (int) ((w * SCREEN_DENSITY) + 0.5),
            (int) ((h * SCREEN_DENSITY) + 0.5)
    );
    params.leftMargin = (int) ((x * SCREEN_DENSITY) + 0.5);
    params.topMargin = (int) ((y * SCREEN_DENSITY) + 0.5);
    frameLayout.setLayoutParams(params);
}

I also want to know why using MotionEventCompat.getX(...) would prevent the picture from jumping to the second finger on screen (after the first finger has been lifted)

Once the first "finger" is lifted, the new one has a different "initial point" and a different "history"; because of that, it can report its events relative to the movement made, not to its final position on the screen. This way the picture won't "jump to where the finger is" but will move according to the amount of "x units" and "y units" traversed.
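
A minimal sketch of that delta idea (mine, not the answerer's; the field names are borrowed from MyCustomView in the question, and this is assumed to run in the ACTION_MOVE branch of onTouchEvent(ev)):

// Only the distance travelled since the last event is applied, never the
// pointer's absolute position, so a newly chosen pointer cannot cause a jump.
int pointerIndex = ev.findPointerIndex(mActivePointerId);
float x = ev.getX(pointerIndex);   // view-relative position of the tracked finger
float y = ev.getY(pointerIndex);

float dx = x - mLastTouchX;        // how far the tracked finger moved
float dy = y - mLastTouchY;

mPosX += dx;                       // the view moves by that delta only
mPosY += dy;

mLastTouchX = x;                   // ACTION_POINTER_UP re-seeds these for a new finger,
mLastTouchY = y;                   // so its first delta is zero and nothing jumps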

Last but not least, the scaling code simply doesn't work AT ALL. If someone can teach me about that as well, it will be greatly appreciated!

You are consuming the event (by returning true from your onTouch listener); because of that, no other listener can continue reading the event, so you cannot trigger any further listeners.

If you want, move both functions (move and resize) inside the same onTouch. My onTouch listener has over 1700 lines of code (because it does a lot of stuff, including programmatically creating Views and adding listeners to them), so I can't post it here, but basically (a rough sketch follows below the list):

1 finger = move the frame. Get raw values and use updateFrame().
2 fingers = resize the frame. Get raw values and use updateFrame().
3+ fingers = drop the first finger and treat it as 2 fingers.
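
The rough sketch below only illustrates that branching (it is not the answerer's actual listener); ivAndroid is the view from the question, scaleDetector is an assumed ScaleGestureDetector, and moveFrame() is a hypothetical drag helper built on updateFrame():

// Sketch of the 1-finger / 2+-finger branching described above.
ivAndroid.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent ev) {
        if (ev.getPointerCount() >= 2) {
            // 2+ fingers: let the ScaleGestureDetector resize the frame
            scaleDetector.onTouchEvent(ev);
        } else {
            // 1 finger: drag the frame using the raw screen coordinates
            moveFrame(v, ev.getRawX(), ev.getRawY());
        }
        return true; // the event is consumed here
    }
});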

– Bonatti
  • Hi, thanks for the reply, I learned a lot! While I understand your proposed solution regarding 1, 2, and 3+ fingers, I have little idea how to implement it in my code... Also, should I actually use onTouch(), or can I keep using onTouchEvent()? – Thomas W Chen Jul 12 '16 at 13:14
  • Overriding `onTouchEvent()` is effectively "creating a new onTouch() listener object" and using that object instead, so in practice it makes no difference. I prefer onTouch, since I can reuse the same object many times (I can have over 80 views without a problem, and over 300 with some problems on most devices) all sharing one onTouch object; if I were to override each one, I would drop to a maximum of ~120 Views in my Activity. If you are using fewer than 40 Views, it doesn't matter. Most devices today have around 512 MB of RAM free on average, so memory management is no longer that critical. – Bonatti Jul 12 '16 at 13:29
  • I tried using SCREEN_DENSITY on my image, but it flies off the screen once I click on it. In your example you applied the LayoutParams to the FrameLayout, which I found confusing. Are you moving the parent that's holding all the pictures? Wouldn't that move all the pictures together, instead of a single one? – Thomas W Chen Jul 12 '16 at 15:17
  • @ThomasWChen No, the LayoutParams are for the child view... It's a FrameLayout that holds FrameLayout children in it. Their parameters are measured from the parent's (0, 0), so you can move the parent (which moves all the children inside with it), or you can move a child (which only moves inside the parent; see the sketch after these comments). I have simplified the code shown, but in my function x/y/h/w cannot be bigger than the parent, and it will not place a View that would not fit. Also, if your SCREEN_DENSITY does not map properly to the screen, there are devices that "fake" the ["bucket"](https://design.google.com/devices/) they are in. – Bonatti Jul 12 '16 at 16:19
  • This will do the trick. https://stackoverflow.com/questions/6578320/how-to-apply-zoom-drag-and-rotation-to-an-image-in-android/47861501#47861501 – user2288580 Dec 18 '17 at 03:14
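
To illustrate the child-versus-parent point from the comments above, here is a minimal hypothetical snippet (reusing ivAndroid and layoutRoot from the question) that repositions only one child inside the root FrameLayout:

// The margins are set on the child's LayoutParams, so only that child moves
// inside the parent; layoutRoot itself stays where it is.
FrameLayout.LayoutParams params =
        (FrameLayout.LayoutParams) ivAndroid.getLayoutParams();
params.leftMargin = 200;            // offset from the parent's left edge, in px
params.topMargin = 100;             // offset from the parent's top edge, in px
ivAndroid.setLayoutParams(params);  // moves only ivAndroid, not layoutRoot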