I am solving a question from LeetCode.com:
Given an array of non-negative integers, you are initially positioned at the first index of the array. Each element in the array represents your maximum jump length at that position. Determine if you are able to reach the last index. For example: A = [2,3,1,1,4], return true. A = [3,2,1,0,4], return false.
One of the most upvoted solutions (here) says that the following uses a greedy approach:
bool canJump(int A[], int n) {
    int last = n - 1;                  // leftmost index known to reach the last index
    for (int i = n - 2; i >= 0; i--) {
        if (i + A[i] >= last)          // index i can jump to the current "last good" index
            last = i;                  // so i becomes the new leftmost good index
    }
    return last <= 0;                  // true iff index 0 can reach the last index
}
I have two questions:
- What is the intuition behind using a greedy algorithm for this?
- How is the above solution a greedy algorithm?
I thought this problem would be solved by Dynamic Programming. I understand that some problems solvable by DP can also be solved greedily, but what is the intuition behind this particular one that makes the greedy approach the more natural choice?
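For reference, this is roughly the O(n^2) DP formulation I had in mind. It is only a sketch, and the canJumpDP / reachable names are my own, not from the linked solution:

#include <vector>

bool canJumpDP(int A[], int n) {
    // reachable[i] == true means index i can be reached from index 0
    std::vector<bool> reachable(n, false);
    reachable[0] = true;
    for (int i = 1; i < n; i++) {
        for (int j = 0; j < i; j++) {
            // i is reachable if some earlier reachable index j can jump at least to i
            if (reachable[j] && j + A[j] >= i) {
                reachable[i] = true;
                break;
            }
        }
    }
    return reachable[n - 1];
}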
This SO question highlights the DP-vs-greedy difference to some extent. I understand this might be asking for a bit much, but if possible, could someone please answer in that context? I would highly appreciate it.
Thank you.
Edit: I think one of the reasons for my confusion is the input [3,3,1,0,4]. As per the greedy paradigm, when i=0, wouldn't we take a jump of size 3 (A[0]) in order to greedily get as close to the last index as possible? But doing so would in fact be incorrect: it lands on index 3, whose jump length is 0, even though the last index is reachable via index 1.
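To make that concrete, here is a sketch of the strategy I was picturing when I read "greedy" (the greedyMaxJump name is mine): from each index, commit to the largest possible jump. On [3,3,1,0,4] it jumps straight to index 3, gets stuck on the 0, and wrongly reports false, even though the answer is true:

bool greedyMaxJump(int A[], int n) {
    int i = 0;
    while (i < n - 1) {
        if (A[i] == 0)
            return false;   // stuck: no forward move is possible from here
        i += A[i];          // always commit to the maximum jump
    }
    return true;            // landed on or past the last index
}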