2

I have written a simple MATLAB/Octave function to create a sum of sinusoids with independent amplitude, frequency, and phase for each component. Is there a cleaner way to write this?

## Create a sum of cosines with independent amplitude, frequency and
## phase for each component:
##   samples(t) = SUM(A[i] * cos(2 * pi * F[i] * t + Phi[i]))
## Return samples as a column vector.
##
function signal = sum_of_cosines(A = [1.0], 
                                 F = [440], 
                                 Phi = [0.0], 
                                 duration = 1.0, 
                                 sampling_rate = 44100)
  t = (0:1/sampling_rate:(duration-1/sampling_rate));
  n = length(t);
  signal = sum(repmat(A, n, 1) .* cos(2*pi*t' * F + repmat(Phi, n, 1)), 2);
endfunction

In particular, the calls to repmat() seem a bit clunky -- is there some nifty vectorization technique waiting for me to learn?

fearless_fool
  • Can't the `repmat` of `A` and `sum` be replaced with a vector*Matrix multiply? – Ben Voigt Oct 09 '13 at 17:33
  • Can you clarify whether A, F, Phi are row or column vectors? – Ben Voigt Oct 09 '13 at 17:36
  • I only see one sinusoid here... Am I missing something? – Frederick Oct 09 '13 at 17:51
  • Assuming `A`, `F` and `Phi` will be row vectors, there is a solution using `bsxfun` and matrix multiplication, but it's not pretty either. – chappjc Oct 09 '13 at 18:32
  • The most elegant way I can think of is doing a discrete (inverse) Fourier transform, and since your target sampling in the time domain is evenly spaced and finite I am sure it will work. What I'm also sure of is that I'm too tired right now to think on how to best interpolate the continuous frequencies from `F` into the discrete frequencies available in discrete Fourier space. – mars Oct 09 '13 at 19:06
  • @BenVoigt: Assume that A, F, and Phi are row vectors (but I'm agnostic). @Frederick: The default arguments create a single sinusoid. But `sum_of_cosines([0.5, 0.5],[440, 660],[0,0])` would create two. @mars: You're probably right, but I've written this function to verify the output of an FFT. So I want to sum them 'by hand' in this case. – fearless_fool Oct 09 '13 at 21:00

3 Answers

4

Is this the same?

signal = cos(2*pi*t' * F + repmat(Phi, n, 1)) * A';

And then maybe

signal = real(exp(j*2*pi*t'*F) * (A .* exp(j*Phi))');
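As a quick numeric sanity check, this sketch (with made-up parameters) compares the phasor form against the question's direct `repmat` sum. Note the `exp(-j*Phi)`: the trailing `'` is the conjugate transpose, which flips the sign back to `+Phi` so the result matches the original formula, as discussed in the comments below.

```octave
% Sanity check with arbitrary parameters: the phasor form should match
% the direct repmat-based sum from the question.
A = [0.5 0.25];  F = [440 660];  Phi = [0 pi/4];
sampling_rate = 8000;
t = (0:1/sampling_rate:(0.01 - 1/sampling_rate));
n = length(t);

direct = sum(repmat(A, n, 1) .* cos(2*pi*t' * F + repmat(Phi, n, 1)), 2);
phasor = real(exp(j*2*pi*t'*F) * (A .* exp(-j*Phi))');

max(abs(direct - phasor))   % difference should be on the order of eps
```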

If you are memory constrained, this should work nicely:

e_jtheta = exp(j * 2 * pi * F / sampling_rate);
phasor = A .* exp(j*Phi);
n = duration * sampling_rate;   % number of samples, not seconds
samples = zeros(n, 1);
for k = 0:n-1                   % start at k = 0 so the first sample is at t = 0
    samples(k+1) = real((e_jtheta .^ k) * phasor');
end
Ben Voigt
  • Using Euler's identity is totally cute! I think you missed a sign on the phase term: `real(exp(j*2*pi*t'*F) * (A .* exp(-j*Phi))');` (right?) but otherwise, that's lovely. – fearless_fool Oct 09 '13 at 21:29
  • @fearless_fool: I think I have it right (in the sense of reproducing the original formula), but I'm not totally sure. Some people use positive phase for leading, some for lagging. Use whichever you like. In EE classes, we learned to express magnitude and phase as a complex number this way and called it "phasor analysis". – Ben Voigt Oct 09 '13 at 21:31
  • (I took [6.003](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-003-signals-and-systems-fall-2011/index.htm), so +1 on phasor analysis). You're right -- phase could be interpreted as leading or lagging -- but I reversed the sign to make it reproduce the original formula. – fearless_fool Oct 09 '13 at 23:39
1

For row vectors A, F, and Phi, you can use bsxfun to get rid of the repmat, but the result is arguably uglier:

signal = cos(bsxfun(@plus, 2*pi*t' * F, Phi)) * A';
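As a side note, on an interpreter with automatic broadcasting (Octave, or MATLAB R2016b and later) the `bsxfun` can be dropped entirely; this is a sketch assuming row vectors, with hypothetical parameter values:

```octave
% Implicit broadcasting: the row vector Phi expands across the rows of
% the n x m matrix 2*pi*t'*F automatically -- no bsxfun or repmat needed.
A = [1 0.5];  F = [440 660];  Phi = [0 pi/2];
sampling_rate = 44100;
t = (0:1/sampling_rate:(0.01 - 1/sampling_rate));

signal = cos(2*pi*t' * F + Phi) * A';
```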
chappjc
  • Thanks for introducing me to bsxfun -- I can see where it would be useful, and I agree that it's not substantially better. Am I right in thinking that it's probably slower, since it has to call `@plus` iteratively? – fearless_fool Oct 09 '13 at 23:41
  • No, `bsxfun` is actually a fast replacement for `repmat`. Loren of MathWorks has a nice blog post on it: http://blogs.mathworks.com/loren/2008/08/04/comparing-repmat-and-bsxfun-performance/. Jonas also has a nice analysis of it here on SO: http://stackoverflow.com/questions/12951453/in-matlab-when-is-it-optimal-to-use-bsxfun/12955706#12955706 – chappjc Oct 09 '13 at 23:48
  • @fearless: If it were actually calling through a function handle, yes it would be much slower. But I'm pretty sure that bsxfun special-cases the primitive operators. – Ben Voigt Oct 10 '13 at 01:09
  • @Ben Voigt - I suspect so as well. – chappjc Oct 10 '13 at 01:13
0

Heh. When I call any of the above vectorized versions with length(A) = 10000, Octave fills up virtual memory and grinds to a halt (or at least a slow crawl -- I haven't had the patience to wait for it to complete).
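The blow-up is easy to quantify: each vectorized form materializes an n-by-length(A) intermediate matrix of doubles. A rough back-of-the-envelope, assuming the default one second at 44.1 kHz:

```octave
% Size of the intermediate n x m matrix built by the vectorized forms.
n = 44100;           % samples: 1 s at 44.1 kHz
m = 10000;           % length(A), number of components
bytes = n * m * 8;   % 8 bytes per double
printf("%.2f GB per temporary matrix\n", bytes / 2^30)   % roughly 3.3 GB
```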

As a result, I've fallen back to the straightforward iterative version:

function signal = sum_of_cosines(A = [1.0], 
                                 F = [440], 
                                 Phi = [0.0], 
                                 duration = 1.0, 
                                 sampling_rate = 44100)
  t = (0:1/sampling_rate:(duration-1/sampling_rate));
  n = length(t);
  signal = zeros(n, 1);
  for i=1:length(A)
    signal += A(i) * cos(2*pi*t'*F(i) + Phi(i));
  endfor
endfunction

This version works plenty fast and teaches me a lesson about trying to be 'elegant'.

P.S.: This doesn't diminish my appreciation for the answers given by @BenVoigt and @chappjc -- I've learned something useful from both!

fearless_fool