Yes, you can do this with texture memory, and it is fast. I personally use ArrayFire for these kinds of operations, because it is faster than anything I could hope to code by hand.
If you want to code it by hand in CUDA, something like this is what you want:
// outside kernel (texture references live at file scope)
texture<float,1> A;
texture<float,1> B;
cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
cudaArray *arrA = NULL, *arrB = NULL;
cudaError_t e = cudaMallocArray(&arrA, &desc, length);  // width = length for a 1D array
e = cudaMallocArray(&arrB, &desc, length);
cudaMemcpyToArray(arrA, 0, 0, hostA, length*sizeof(float), cudaMemcpyHostToDevice);  // hostA/hostB = your source data
cudaMemcpyToArray(arrB, 0, 0, hostB, length*sizeof(float), cudaMemcpyHostToDevice);
A.filterMode = cudaFilterModePoint;
A.addressMode[0] = cudaAddressModeClamp;
B.filterMode = cudaFilterModePoint;
B.addressMode[0] = cudaAddressModeClamp;
cudaBindTextureToArray(A, arrA, desc);
cudaBindTextureToArray(B, arrB, desc);
...
// inside kernel
float valA = tex1D(A, idx);  // tex1D takes the texture and a single coordinate
float valB = tex1D(B, idx);
float f = 0.5f;
output[idx] = f*valA + (1.0f - f)*valB;  // weighted blend of the two inputs
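To show how those pieces connect, here is a minimal host-side sketch. The kernel name lerp_kernel, the device buffer d_output, and the launch configuration are placeholders of mine, not part of the original snippet:

__global__ void lerp_kernel(float *output, int length)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (idx < length) {
        float valA = tex1D(A, idx);                   // reads from the textures bound above
        float valB = tex1D(B, idx);
        float f = 0.5f;
        output[idx] = f*valA + (1.0f - f)*valB;
    }
}

// host side
float *d_output = NULL;
cudaMalloc(&d_output, length * sizeof(float));
int block = 256;
lerp_kernel<<<(length + block - 1)/block, block>>>(d_output, length);
cudaDeviceSynchronize();
cudaUnbindTexture(A);
cudaUnbindTexture(B);
cudaFreeArray(arrA);
cudaFreeArray(arrB);
cudaFree(d_output);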
If you want to just plug in ArrayFire (which in my experience is faster than what I can code by hand, not to mention way simpler to use), then you'll want:
// in arrayfire
#include <arrayfire.h>
using namespace af;
array A = randu(10,1);
array B = randu(10,1);
float f = 0.5;
array C = f*A + (1-f)*B;  // elementwise blend of corresponding entries
The above assumes you want to interpolate between corresponding indices of two different arrays or matrices. There are other interpolation functions available too, for example for interpolating within a single array at fractional positions.
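As one hedged example of those other functions: in current ArrayFire, approx1 interpolates a 1D signal at arbitrary (possibly fractional) positions. The variable names and the 0.5-spaced query grid below are my own choices for illustration:

// interpolate a single array at fractional positions with approx1
#include <arrayfire.h>
using namespace af;

array in  = randu(10, 1);                         // data sampled at indices 0..9
array pos = iota(dim4(19)) * 0.5;                 // query positions 0, 0.5, 1.0, ..., 9.0
array out = approx1(in, pos, AF_INTERP_LINEAR);   // linear interpolation along the first dimension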