OK, I've implemented the Karplus-Strong algorithm in C. It's a simple algorithm for simulating a plucked-string sound. You start with a ring buffer of length n filled with noise (n = sampling frequency / desired frequency), pass it through a simple two-point averaging filter y[n] = (x[n] + x[n-1]) / 2, output the result, and feed it back into the delay line. Rinse and repeat. The averaging smooths out the noise over time, producing a natural plucked-string decay.
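For reference, here's a minimal sketch of the loop I described. The buffer length, names, and the deterministic "noise" fill are my own choices for illustration, not from any particular paper:

```c
#include <stddef.h>

#define KS_N 100  /* delay length: sampling freq / desired freq, e.g. 44100/441 */

typedef struct {
    float buf[KS_N];  /* the ring buffer / delay line */
    size_t pos;       /* current read/write position */
} ks_string;

/* "Pluck" the string: fill the delay line with a burst of noise.
   A tiny LCG is used here so the example is deterministic; a real
   implementation would use any white-noise source. */
void ks_pluck(ks_string *s) {
    unsigned seed = 12345u;
    for (size_t i = 0; i < KS_N; i++) {
        seed = seed * 1103515245u + 12345u;
        s->buf[i] = ((seed >> 16) & 0x7fffu) / 16383.5f - 1.0f; /* ~[-1, 1) */
    }
    s->pos = 0;
}

/* One output sample: average the two oldest samples (the two-point
   filter y[n] = (x[n] + x[n-1]) / 2), emit it, and feed it back. */
float ks_tick(ks_string *s) {
    size_t next = (s->pos + 1) % KS_N;
    float out = 0.5f * (s->buf[s->pos] + s->buf[next]);
    s->buf[s->pos] = out;  /* feedback into the delay line */
    s->pos = next;
    return out;
}

/* Sum of squares of the next `count` output samples (handy for
   checking that the loop really decays over time). */
float ks_energy(ks_string *s, int count) {
    float e = 0.0f;
    for (int i = 0; i < count; i++) {
        float v = ks_tick(s);
        e += v * v;
    }
    return e;
}
```

Each pass of the buffer through the averaging filter attenuates every harmonic except DC, which is what gives the decay.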
But I noticed that with an integer delay-line length, several distinct high pitches get quantized to the same delay length. The integer length also doesn't allow for smoothly varying pitch (as in vibrato or glissando). I've read several papers on extensions to the Karplus-Strong algorithm, and they all talk about using either an interpolated delay line for fractional delay or an allpass filter:
http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.1997.068
http://www.jaffe.com/Jaffe-Smith-Extensions-CMJ-1983.pdf
http://www.music.mcgill.ca/~gary/courses/projects/618_2009/NickDonaldson/index.html
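To make the pitch-quantization problem concrete, here's a tiny sketch. The 44.1 kHz sample rate and the specific test frequencies below are just assumptions for illustration:

```c
/* With an integer delay length N = round(fs / f0), distinct high
   pitches collapse onto the same N, and the pitch actually produced
   is fs / N rather than the f0 that was requested. */
int ks_delay_len(double fs, double f0) {
    return (int)(fs / f0 + 0.5);  /* round to nearest integer */
}

double ks_actual_pitch(double fs, int n) {
    return fs / (double)n;
}
```

At fs = 44100 Hz, requests for 4000, 4100, and 4200 Hz all round to N = 11, so all three come out as 44100/11 ≈ 4009 Hz; the next available pitch down is N = 12 at 3675 Hz.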
I've implemented interpolated delay lines before, but only for wavetables, where the waveform buffer never changes; I just step through the table at different rates. What confuses me about the KS algorithm is that the papers seem to be talking about actually changing the delay length, rather than just the rate at which I step through it. KS complicates things because I'm supposed to be constantly feeding values back into the delay line.
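For concreteness, the wavetable-style fractional read I mean looks roughly like this (function and variable names are mine):

```c
#include <stddef.h>

/* Read a static wavetable at a fractional position `phase` in
   [0, len), linearly interpolating between the two neighboring
   samples and wrapping at the end. This is straightforward when the
   buffer never changes; the KS feedback loop is what makes it less
   obvious how to apply the same idea. */
float table_read_lerp(const float *table, size_t len, double phase) {
    size_t i0 = (size_t)phase;         /* integer part */
    size_t i1 = (i0 + 1) % len;        /* wrap around the table */
    double frac = phase - (double)i0;  /* fractional part */
    return (float)((1.0 - frac) * table[i0] + frac * table[i1]);
}
```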
So how would I go about implementing this? Do I feed the interpolated value back into the delay line, or something else? Do I get rid of the two-point averaging lowpass filter completely?
And how would the allpass filter version work? Am I supposed to replace the two-point averaging filter with the allpass filter? And with either method (linear interpolation or allpass), how would I glide between distant pitches for a glissando?