Typically shadow mapping is done by comparing the window-space Z coordinate (this is what a depth texture stores) of your current fragment against the one stored in your light's shadow map. The comparison has to happen in a common reference frame, so you need to re-project your current fragment's position from the perspective of your light.
Right now you have the view-space position, which is relative to your current camera and not particularly useful here. What you want is the world-space position, and you can get that by transforming the view-space position by the inverse of the view matrix.
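In GLSL that is a one-liner. As a minimal sketch (the uniform name invViewMatrix is an assumption; use whatever you call the inverse of your camera's view matrix):
uniform mat4 invViewMatrix; // Inverse of the camera's view matrix (name assumed)

// viewPos is your fragment's view-space position; w = 1.0 marks it as a point
vec4 worldPos = invViewMatrix * vec4 (viewPos.xyz, 1.0);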
Given the world-space position, transform it into clip-space from the light's perspective:
// This will be in clip-space
vec4 lightSpacePos = lightProjectionMatrix * lightViewMatrix * vec4 (worldPos.xyz, 1.0);
// Transform it into NDC-space by dividing by w
lightSpacePos /= lightSpacePos.w;
// Range is now [-1.0, 1.0], but you need [0.0, 1.0]
lightSpacePos = lightSpacePos * vec4 (0.5) + vec4 (0.5);
Assuming the default depth range, lightSpacePos is now ready for use: xy contains the texture coordinates to sample from your shadow map and z contains the depth to use for the comparison.
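For illustration, the comparison itself might look something like this (the sampler name shadowMap and the bias constant are assumptions; a small depth bias is a common way to avoid shadow acne, not something prescribed above):
uniform sampler2D shadowMap; // The light's depth texture (name assumed)

// Depth of the nearest occluder, as seen from the light
float storedDepth = texture (shadowMap, lightSpacePos.xy).r;

// Small bias to reduce shadow acne; tune per-scene (value assumed)
float bias = 0.0015;

// 0.0 = fully shadowed, 1.0 = lit
float lit = (lightSpacePos.z - bias) > storedDepth ? 0.0 : 1.0;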
For a more thorough explanation, see the following answer.
Incidentally, you will want to eliminate the position texture from your G-Buffer to achieve reasonable performance. It is very easy to reconstruct the world- or view-space position given only the depth and the projection and view matrices, and the arithmetic involved is much quicker than an extra texture fetch. Storing an additional texture with adequate precision to represent position in 3D space will burn through tons of memory bandwidth each frame, and it is completely unnecessary.
This article from the OpenGL Wiki explains how to do this. You can take it one step further and work back to world-space, which is more desirable here than view-space. You may need to tweak your depth buffer a little bit to get adequate precision, but it will still be quicker than storing position separately.
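As a rough sketch of that reconstruction (the uniform names depthTex and invViewProjMatrix are assumptions, and this follows the general approach of the linked article rather than quoting it):
uniform sampler2D depthTex;      // G-Buffer depth texture (name assumed)
uniform mat4 invViewProjMatrix;  // Inverse of (projection * view), name assumed

// texCoord is the screen-space UV of the fragment being shaded
float depth = texture (depthTex, texCoord).r;

// Rebuild the NDC-space position from the UV and the sampled depth
vec4 ndcPos = vec4 (vec3 (texCoord, depth) * 2.0 - 1.0, 1.0);

// Un-project back to world-space, then undo the perspective divide
vec4 worldPos = invViewProjMatrix * ndcPos;
worldPos /= worldPos.w;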