Question (score 90)

I am working on a project which incorporates computing a sine wave as input for a control loop.

The sine wave has a frequency of 280 Hz, the control loop runs every 30 µs, and everything is written in C for an Arm Cortex-M7.

At the moment we are simply doing:

double time;
void control_loop() {
    time += 30e-6;
    double sine = sin(2 * M_PI * 280 * time);
    ...
}

Two problems/questions arise:

  1. When running for a long time, time becomes bigger. Suddenly there is a point where the computation time for the sine function increases drastically (see image). Why is this? How are these functions usually implemented? Is there a way to circumvent this (without noticeable precision loss), as speed is a huge factor for us? We are using sin from math.h (Arm GCC). [Image: profiling of the sin function, showing the jump in computation time]
  2. How can I deal with time in general? When running for a long time, the variable time will inevitably reach the limits of double precision. Even using a counter time = counter++ * 30e-6; only improves this, but it does not solve it. As I am certainly not the first person who wants to generate a sine wave for a long time, there must be some ideas/papers/... on how to implement this fast and precise.
energetic (edited by Peter Mortensen)

12 Answers

Answer (score 116)

Instead of calculating sine as a function of time, maintain a sine/cosine pair and advance it through complex number multiplication. This doesn't require any trigonometric functions or lookup tables; only four multiplies and an occasional re-normalization:

static const double a = 2 * M_PI * 280 * 30e-6;
static double dx, dy; // set once at startup: dx = cos(a); dy = sin(a);
                      // (cos(a) is not a constant expression in C, so it
                      // cannot be used as a static initializer)
double x = 1, y = 0; // complex x + iy
int counter = 0;

void control_loop() {
    double xx = dx*x - dy*y;
    double yy = dx*y + dy*x;
    x = xx, y = yy;

    // renormalize once in a while, based on
    // https://www.gamedev.net/forums/topic.asp?topic_id=278849
    if((counter++ & 0xff) == 0) {
        double d = 1 - (x*x + y*y - 1)/2;
        x *= d, y *= d;
    }

    double sine = y; // this is your sine
}

The frequency can be adjusted, if needed, by recomputing dx, dy.

Additionally, all the operations here can be done, rather easily, in fixed point.
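For illustration, the rotation step in Q15 fixed point might look like this (a sketch: the Q15 scaling, the rounded constants, and the function name are my own, not from the answer; assumes arithmetic right shift, as on GCC/Arm):

```c
#include <stdint.h>

/* Q15 fixed point: a value v represents v / 32768.0 */

/* cos(a), sin(a) for a = 2*pi*280*30e-6 ~= 0.0527788 rad,
   precomputed offline and rounded to Q15 */
static const int32_t DX = 32722;   /* round(cos(a) * 32768) */
static const int32_t DY = 1729;    /* round(sin(a) * 32768) */

static int32_t x = 32767, y = 0;   /* ~(1 + 0i) in Q15 */

int16_t sine_step(void) {
    int32_t xx = (DX * x - DY * y) >> 15;  /* truncating Q15 multiplies */
    int32_t yy = (DX * y + DY * x) >> 15;
    x = xx;
    y = yy;
    return (int16_t)y;                     /* sine scaled to [-32767, 32767] */
}
```

The truncating shift biases the result slightly, so in practice one would add a rounding term (1 << 14) before shifting and still renormalize occasionally, just as in the floating-point version.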


Rationality

As @user3386109 points out below (+1), 280 * 30e-6 = 21 / 2500 is a rational number, so the sine should loop around after exactly 2500 samples. We can combine this method with theirs by resetting our generator (x=1, y=0) every 2500 iterations (or 5000, or 10000, etc...). This would eliminate the need for renormalization, as well as get rid of any long-term phase inaccuracies.

(Technically any floating point number is a dyadic rational. However, 280 * 30e-6 doesn't have an exact representation in binary. Yet, by resetting the generator as suggested, we'll get an exactly periodic sine as intended.)


Explanation

Some requested an explanation down in the comments of why this works. The simplest explanation is to use the angle sum trigonometric identities:

xx = cos((n+1)*a) = cos(n*a)*cos(a) - sin(n*a)*sin(a) = x*dx - y*dy
yy = sin((n+1)*a) = sin(n*a)*cos(a) + cos(n*a)*sin(a) = y*dx + x*dy

and the correctness follows by induction.

This is essentially De Moivre's formula, if we view those sine/cosine pairs as complex numbers, in accordance with Euler's formula.

A more insightful way might be to look at it geometrically. Complex multiplication by exp(ia) is equivalent to rotation by a radians. Therefore, by repeatedly multiplying by dx + idy = exp(ia), we incrementally rotate our starting point 1 + 0i along the unit circle. The y coordinate, according to Euler's formula again, is the sine of the current phase.

Normalization

While the phase continues to advance with each iteration, the magnitude (aka norm) of x + iy drifts away from 1 due to round-off errors. However, we're interested in generating a sine of amplitude 1, so we need to normalize x + iy to compensate for numeric drift. The straightforward way is, of course, to divide it by its own norm:

double d = 1/sqrt(x*x + y*y);
x *= d, y *= d;

This requires a calculation of a reciprocal square root. Even though we normalize only once every X iterations, it'd still be cool to avoid it. Fortunately |x + iy| is already close to 1, thus we only need a slight correction to keep it at bay. Expanding the expression for d around 1 (first order Taylor approximation), we get the formula that's in the code:

d = 1 - (x*x + y*y - 1)/2

TODO: to fully understand the validity of this approximation one needs to prove that it compensates for round-off errors faster than they accumulate -- and thus get a bound on how often it needs to be applied.

Yakov Galka

  • The Cortex M7 has an optional FPU, so this might actually run fine on OP's MCU. – G. Sliepen Oct 26 '21 at 20:42
  • As a pedantic note, this will most likely not give *exactly* the frequency OP wants, due to roundoff errors, although it will be very close. (Then again, neither will the OP's code, nor most other simple solutions.) For most purposes the difference probably won't matter, or will be swamped by other sources of phase error such as clock drift. But if the OP wanted to, say, generate two sine waves from the same source, with one having a frequency that's a rational multiple of the other's and with absolutely no long term phase drift, then they might need some extra code to ensure this. – Ilmari Karonen Oct 27 '21 at 12:27
  • While this is clever, it is also a lot harder to understand (let alone debug) than a simple solution based on `sin` – Rob Audenaerde Oct 27 '21 at 15:49
  • @RobAudenaerde: nothing beyond basic understanding of complex numbers and taylor series. If you can calculate `sin` -- sure go and use it. This approach has educational value though because it demonstrates how to avoid trig functions (and square roots). As an exercise for future readers I recommend eliminating the cos/sin in `dx`/`dy` initialization, and explaining why it can be done. – Yakov Galka Oct 27 '21 at 16:11
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/238609/discussion-between-yakov-galka-and-krupip). – Yakov Galka Oct 27 '21 at 16:48
  • @IlmariKaronen This method works for that with a slight change, which is to calculate dx and dy using the least-common-multiple frequency, and then using the algorithm to step each frequency generator forward (say n steps for the first generator and m steps for the second each iteration). And, by my estimate (using Mathematica), the code in this answer loses only about 2x10^-13 radians per second, and the phase error through time is *much* less noisy than the plain sin implementation. – Kyle Miller Oct 27 '21 at 18:15
  • @RobAudenaerde: while we generally frown on tricky mathematical stuff like this on Stack Overflow, that is because typically questions are from the perspective of development for desktop or server machines, saying stuff like premature optimization is the root of all evil. Embedded devices are highly resource constrained and approaches like these are common place to those environments. – whatsisname Oct 28 '21 at 03:12
  • I love this answer! *calculation of a reciprocal square root* - https://en.wikipedia.org/wiki/Fast_inverse_square_root springs to mind. *... exercise for future readers I recommend eliminating the cos/sin in dx/dy initialization* - I'm guessing [small-angle approximations](https://en.wikipedia.org/wiki/Small-angle_approximation) might do the trick here... – Digital Trauma Oct 28 '21 at 20:05
  • `to fully understand the validity of this approximation one needs to prove that it compensates for round-off errors faster than they accumulate` Plotting the errors over time, I find that using double-precision floats, errors accumulate at a rate of 3.5E-10 per billion iterations. If I use the fix for norm drift described in the answer, the norm is stable. I can even relax the fix by only applying it every 2^24 iterations, and this reduces the error to a maximum of 6E-12, with the error dropping back to 0 each time in a sawtooth pattern. – Nick ODell Oct 28 '21 at 22:50
  • @NickODell: hmm, I thought it had only linear convergence; but it seems to be quadratic. So it's a better approximation than I anticipated..? I'm glad that you can verify that empirically. – Yakov Galka Oct 28 '21 at 23:14
  • This very much looks like it was based on the following answer, which I would encourage you to reference: https://dsp.stackexchange.com/a/1087/34576 Otherwise, note that this method is known as a recursive oscillator and has other previous literature associated with it (Vicanek, 2015, provides a nice summary of related methods, including this one). – sircolinton Jul 07 '22 at 23:57
  • @sircolinton Thank you for the link; however, I already linked to all sources I used when writing this answer. As I said earlier, this relies on basic math results that all math graduates would know, and it is not surprising that there exists previous literature covering this or similar methods. I'd appreciate not to be baselessly accused of plagiarism; rather given some credit for the original research and reinventing the wheel ;) – Yakov Galka Jul 08 '22 at 02:01
Answer (score 54)

The function can be rewritten as

double n;
void control_loop() {
    n += 1;
    double sine = sin(2 * M_PI * 280 * 30e-6 * n);
    ...
}

That does exactly the same thing as the code in the question, with exactly the same problems. But it can now be simplified:

280 * 30e-6 = 280 * 30 / 1000000 = 21 / 2500 = 8.4e-3

Which means that when n reaches 2500, you've output exactly 21 cycles of the sine wave. Which means that you can set n back to 0. The resulting code is:

int n;
void control_loop() {
    n += 1;
    if (n == 2500)
        n = 0;
    double sine = sin(2 * M_PI * 8.4e-3 * n);
    ...
}

As long as your code can run for 21 cycles without problems, it'll run forever without problems.

user3386109
Answer (score 41)

I'm rather shocked at the existing answers. The first problem you detect is easily solved, and the next problem magically disappears when you solve the first problem.

You need a basic understanding of math to see how it works. Recall, sin(x+2pi) is just sin(x), mathematically. The large increase in time you see happens when your sin(float) implementation switches to another algorithm, and you really want to avoid that.

Remember that float has only about 6 significant decimal digits. 100000.0f*M_PI+x uses those 6 digits for 100000.0f*M_PI, so there's nothing left for x.

So, the easiest solution is to keep track of x yourself. At t=0 you initialize x to 0.0f. Every 30 µs, you increment x += M_PI * 280 * 30e-06;. The time does not appear in this formula! Finally, if x > 2*M_PI, you decrement x -= 2*M_PI; (since sin(x) == sin(x-2*pi)).

You now have an x that stays nicely in the range 0 to 6.2832, where sin is fast and the 6 digits of precision are all useful.

MSalters

  • Yes, this is correct IMHO because it eliminates the time-component, which was useless anyway in OPs problem. – Rob Audenaerde Oct 27 '21 at 15:46
  • x will range to `6.28` (2π) (in the last sentence). This x is called "phase". – Zeus Oct 28 '21 at 06:55
  • @Zeus: Good catch. I mixed up -pi to +pi or 0 to 2pi. The former has one additional bit of precision (due to using the sign bit) but is slightly harder to explain. – MSalters Oct 28 '21 at 07:26
  • You are wasting the sign bit. You can decrement x when `x > M_PI` so it ranges from `-M_PI` to `M_PI`. Gains you an extra bit of precision. – Goswin von Brederlow Oct 29 '21 at 12:18
  • You are missing the fact that the errors in the calculation of `x` will accumulate this way. Thus, after a long enough time you will not be calculating `sin` at the "correct" value of `x`. (Of course, neither is the OP!) – Kapil Oct 29 '21 at 15:26
  • @Kapil: Exactly what do you mean by "accumulate"? Do you have reason to believe that the rounding errors are on average biased away from zero? And how does that differ from any of the other answers? Of course, `30e-06` itself has a numeric rounding error, but the physical clock hardware itself is likely more imprecise. Typical crystals have an 1E-5 error (5th digit), which is an order of magnitude worse. – MSalters Oct 29 '21 at 18:09
  • @MSalters The values of sin(2pi x) for x=n*280*30e-6 should take the same values for n=n_1 and n=n_2 if n_1-n_2 is divisible by 2500. I don't think your computation has this property. – Kapil Oct 30 '21 at 11:34
  • @Kapil: That's a pure mathematical approach, but in finite-precision math that's not as important. We care about error accumulation, and rounding errors are much less of an issue if they average out. Short-term you can always use `double` if 6 digits aren't enough; long-term even `double` isn't a solution if rounding errors add up. – MSalters Oct 30 '21 at 14:06
  • The thing is, some of the answers you are "shocked" by avoid error accumulation while being much faster than what you suggest. – Sven Marnach Nov 08 '21 at 09:45
  • @SvenMarnach: Perhaps you can explain how the error accumulates, despite the _average_ error term being exactly 0.0. It's not even a random walk; there's a fixed upper bound to the phase error. (about 1E-6) – MSalters Nov 08 '21 at 13:15
  • @MSalters while I agree with you that the OP most likely doesn't care about it being 'exact' as they're claiming to, your argument about errors cancelling out is incorrect. The problem is that you're adding the same value `M_PI * 280 * 30e-06` repeatedly, and that value in its binary representation is slightly off. So on average you are consistently drifting away in the same direction. (Same thing with my answer without @user3386109's approach because dx,dy are inexact.) – Yakov Galka Nov 12 '21 at 16:39
  • r = fmod(r,TwoPi); // range reduction for trigonometric function in radians – Adam Nov 18 '22 at 15:52
Answer (score 30)

How to generate a lovely sine.

The DAC is 12 bits, so you have only 4096 levels. It makes no sense to send more than 4096 samples per period. In real life you will need far fewer samples to generate a good quality waveform.

  1. Create C file with the lookup table (using your PC). Redirect the output to the file (https://helpdeskgeek.com/how-to/redirect-output-from-command-line-to-text-file/).
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define STEP   ((2 * M_PI) / 4096.0)

int main(void)
{
    double alpha = 0;

    printf("#include <stdint.h>\nconst uint16_t sine[4096] = {\n");
    for (int x = 0; x < 4096 / 16; x++)
    {
        for (int y = 0; y < 16; y++)
        {
            printf("%d, ", (int)(4095 * (sin(alpha) + 1.0) / 2.0));
            alpha += STEP;
        }
        printf("\n");
    }
    printf("};\n");
    return 0;
}

https://godbolt.org/z/e899d98oW

  2. Configure the timer to trigger an overflow 4096 * 280 = 1146880 times per second, and set the timer to generate the DAC trigger event. For a 180 MHz timer clock this will not be exact, and the frequency will be 279.906449045 Hz. If you need better precision, change the number of samples to match your timer frequency and/or change the timer clock frequency (H7 timers can run at up to 480 MHz).

  3. Configure the DAC to use DMA and transfer the value from the lookup table created in step 1 to the DAC on the trigger event.

  4. Enjoy a beautiful sine wave on your oscilloscope. Note that your microcontroller core will not be loaded at all; you will have it free for other tasks. If you want to change the period, simply reconfigure the timer. You can do it as many times per second as you wish. To reconfigure the timer, use timer DMA burst mode - which will reload the PSC & ARR registers on the update event automatically, without disturbing the generated waveform.

I know it is advanced STM32 programming and it will require register level programming. I use it to generate complex waveforms in our devices.

It is the correct way of doing it. No control loops, no calculations, no core load.
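The 279.906 Hz figure above follows from rounding the timer divider to an integer; a quick arithmetic check (the 180 MHz clock and the 4096 * 280 Hz target are the answer's numbers; rounding to the nearest divider is my assumption):

```c
/* Sine frequency actually produced when the timer divider is rounded to an
   integer, for a 4096-sample table at a nominal 280 Hz. */
double dac_sine_freq(double timer_clk) {
    double target = 4096.0 * 280.0;            /* wanted trigger rate: 1146880 Hz */
    int div = (int)(timer_clk / target + 0.5); /* nearest integer divider */
    return timer_clk / div / 4096.0;           /* achievable sine frequency */
}
```

For a 180 MHz clock the nearest divider is 157, giving the 279.906449045 Hz quoted in the answer.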

0___________

  • *"It makes no sense to send more than 4096 samples per period."* This is not quite true if you send those samples at regular intervals. You can still get a bit of extra quality out of the output if you increase the sample rate so that the transitions between one level to the next are closer to their ideal point in time. Instead of tying the sample rate exactly to the DAC resolution, I would increase or decrease it as necessary so that you get the generated frequency closer to 280 Hz. That said, excellent answer to use a timer + DMA. – G. Sliepen Oct 26 '21 at 21:37
  • @G.Sliepen it will not help a lot. To have a smooth waveform capacitor & resistor forming a LP filter is needed. From my experience (designed plenty devices) 4096 samples are more than enough for this task). BTW I did write it in my answer. Change the number of samples or timer clock frequency :) – 0___________ Oct 26 '21 at 21:41
  • Thanks for the detailed answer. If I just want to output the sine on the DAC, that would be the perfect way to go. But unfortunately, in our case the sine is just the reference input of our system. Reference input and current position measurements then go into a state observer, followed by a controller which spits out the DAC values (which should make our controlled object follow the reference input). – energetic Oct 27 '21 at 15:41
  • @energetic depending on how you are talking to your controller, DMA may still work. e.g. have the I2C peripheral access the LUT via DMA in the same fashion as the DAC peripheral would. that said, if you can't find a way to make that work, you can always replace the DMA with a tiny function with an internal counter that steps through the LUT each time it's called. more CPU time, less dev time, that's the tradeoff. – Willa Oct 27 '21 at 18:06
  • You could maybe get by storing only a quarter of the lookup table, and depending on which quadrant you're in, invert the thing in either X or Y direction with a little subtraction. That's basically what the floating point trig functions in the Commodore 64 do – Lorraine Oct 28 '21 at 10:18
Answer (score 18)

I'd like to address the embedded programming issues in your code directly - @0___________'s answer is the correct way to do this on a microcontroller and I won't retread the same ground.

  • Variables representing time should never be floating point. If your increment is not a power of two, errors will always accumulate. Even if it is, eventually your increment will become smaller than the smallest representable step at the accumulated value's magnitude, and the timer will stop advancing. Always use integers for time. You can pick an integer size big enough to ignore roll over - an unsigned 32 bit integer representing milliseconds will take 50 days to roll over, while an unsigned 64 bit integer will take over 500 million years.
  • Generating any periodic signal where you do not care about the signal's phase does not require a time variable. Instead, you can keep an internal counter which resets to 0 at the end of a period. (When you use DMA with a look-up table, that's exactly what you're doing - the counter is the DMA controller's next-read pointer.)
  • Whenever you use a transcendental function such as sine in a microcontroller, your first thought should be "can I use a look-up table for this?" You don't have access to the luxury of a modern operating system optimally shuffling your load around on a 4 GHz+ multi-core processor. You're often dealing with a single thread that will stall waiting for your 200 MHz microcontroller to bring the FPU out of standby and perform the approximation algorithm. There is a significant cost to transcendental functions. There's a cost to LUTs too, but if you're hitting the function constantly, there's a good chance you'll like the tradeoffs of the LUT a lot better.
Willa
Answer (score 10)

As noted in some of the comments, the time value is continually growing with time. This poses two problems:

  1. The sin function likely has to perform argument reduction (a modulo by 2π) internally to bring the value into a supported range.
  2. The resolution of time will become worse and worse as the value increases, because more bits are spent on the integer part and fewer remain for the fraction.

Making the following changes should improve the performance:

double time;
void control_loop() {
    time += 30.0e-6;
    if((1.0/280.0) < time)
    {
        time -= 1.0/280.0;
    }
    
    double sine = sin(2 * M_PI * 280 * time);
    ...
}

Note that once this change is made, time no longer represents absolute time; it is effectively a phase variable confined to a single period.

Jonathon S.

  • Thanks for the input. This indeed improves the performance, but only postpones the problems to a later point. – energetic Oct 26 '21 at 20:35
  • @energetic it doesn't postpone the problem in that `time` will not grow without bound. `time` will always be between `[0, 1.0/280.0 + 30.0e-6)` but it effectively becomes a `phase` variable instead of a `time` variable – Joshua Voskamp Oct 26 '21 at 21:05
  • Yes, this is clear. I meant that with every subtraction a small quantization error occurs which sums up over time. – energetic Oct 26 '21 at 21:26
  • @energetic are you accumulating the result outside of function `control_loop()`, for example in a simulation on a PC? If this is the case, the problem may be that the program is requiring more and more RAM to store/display the result. I've seen this problem when using LabView or Simulink in the past. – Jonathon S. Oct 26 '21 at 21:52
Answer (score 10)

Use a look-up table. Your comment in the discussion with Eugene Sh.:

A small deviation from the sine frequency (like 280.1Hz) would be ok.

In that case, with a control interval of 30 µs, if you have a table of 119 samples that you repeat over and over, you will get a sine wave of 280.112 Hz. Since you have a 12-bit DAC, you only need 119 * 2 = 238 bytes to store this if you would output it directly to the DAC. If you use it as input for further calculations like you mention in the comments, you can store it as float or double as desired. On an MCU with embedded static RAM, it only takes a few cycles at most to load from memory.
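The 280.112 Hz figure is just the repetition rate of a 119-sample table played at one sample per 30 µs:

```c
/* Frequency of a sine generated by cycling an n-sample table at 30 us/sample */
double table_freq(int n) {
    return 1.0 / (n * 30e-6);  /* one pass through the table = one period */
}
```

For n = 119 this gives approximately 280.112 Hz, i.e. about 0.04% above the nominal 280 Hz.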

G. Sliepen

  • DMA, DMA - no cache. Cache is ignored by the DMA – 0___________ Oct 26 '21 at 21:31
  • @0___________ Sure, but my answer was assuming the sine wave would be generated by the periodically called `control_loop()` function, in which case cache might matter. – G. Sliepen Oct 26 '21 at 21:41
  • It is simply a lack of the OPs experience in the uC programming and an attempt to brute-force the solution. The idea makes no sense – 0___________ Oct 26 '21 at 21:43
  • @0___________ Yes, but not everyone reading this question who is interested in solving a similar problem might use the same MCU. Also, the timer+DMA approach might not work if you need all timers and/or DMA channels for other things. So the brute force way might be useful for some. – G. Sliepen Oct 26 '21 at 21:47
  • Why not have a lookup table for 21 periods? Wouldn't the sampling frequency fit perfectly? – ramzeek Oct 28 '21 at 22:50
  • @G.Sliepen, I'm suggesting 2500 samples, so instead of going from 0 to T your table would go from 0 to 21 T, where T is the period. – ramzeek Oct 29 '21 at 05:26
  • @wikikikitiki Ah, 2500 samples might take quite a bit of memory to store. It might be fine or not depending on the model of the MCU. If you have that much memory to spare, then sure. – G. Sliepen Oct 29 '21 at 05:30
Answer (score 9)

If you have a few kilobytes of memory available, you can eliminate this problem completely with a lookup table.

With a sampling period of 30 µs, 2500 samples will have a total duration of 75 ms. This is exactly equal to the duration of 21 cycles at 280 Hz.

I haven't tested or compiled the following code, but it should at least demonstrate the approach:

#include <math.h>
#include <stdlib.h>

double sin2500(void) {
  static double *table = NULL;
  static int n = 2499;
  if (!table) {
    table = malloc(2500 * sizeof(double));
    for (int i = 0; i < 2500; i++)
      table[i] = sin(2 * M_PI * 280 * i * 30e-06);
  }
  n = (n + 1) % 2500;
  return table[n];
}
r3mainer
Answer (score 8)

How about a variant of others' modulo-based concept:

int t = 0;
int divisor = 1000000;
void control_loop() {
    t += 30 * 280;
    if (t > divisor) t -= divisor;
    double sine = sin(2 * M_PI * t / (double)divisor);
    ...
}

Because the modulo is computed in integer arithmetic, it introduces no round-off errors.

tshiono
Answer (score 8)

There is an alternative approach to calculating a series of values of sine (and cosine) for angles that increase by some very small amount. It essentially devolves down to calculating the X and Y coordinates of a circle, and then dividing the Y value by some constant to produce the sine, and dividing the X value by the same constant to produce the cosine.

If you are content to generate a "very round ellipse", you can use the following hack, which is attributed to Marvin Minsky in the 1960s. It's much faster than calculating sines and cosines, although it introduces a very small error into the series. Here is an extract from the HAKMEM document, Item 149, where the Minsky circle algorithm is outlined.


ITEM 149 (Minsky): CIRCLE ALGORITHM Here is an elegant way to draw almost circles on a point-plotting display:

NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X

This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.

The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.


Here is a link to the hakmem: http://inwap.com/pdp10/hbaker/hakmem/hacks.html

Walter Mitty

  • This is basically @YakovGalka's answer, but trading saving a register and a few instructions for lower accuracy. ARM processors have plenty of registers, so I don't think it's worth doing this, especially not if you only have about 119 samples per full wave. It's a great hack though! – G. Sliepen Oct 27 '21 at 16:06
  • One could also interpret this as the Leapfrog-Verlet method for `x''=-x`. – Lutz Lehmann Oct 28 '21 at 19:39
  • Yes, this is the same algorithm. Except that it has a bug. Except that the bug turns out to be a feature. That's neat! – Walter Mitty Oct 29 '21 at 11:13
Answer (score 3)

I think it would be possible to use a modulo because sin() is periodic.

Then you don’t have to worry about the problems.

double time = 0;
double sine;
void control_loop()
{
  time += 30e-6;
  if (time > 1)
  {
    time -= 1;
  }
  sine = sin(2 * M_PI * 280 * time);
  ...
}
Robin

  • You can't use `%` on floating-point operands. You can call the `fmod` library function instead. However, most likely `sin()` already does this internally when needed, so that it only really needs to handle the case where the angle is in `[-pi,pi]`. (It probably checks whether the angle is already in that range and skips the fmod in that case, which would explain why that case is faster.) So using `fmod` is redundant and not likely to improve performance. – Nate Eldredge Oct 27 '21 at 00:01
  • But if the performance drops it means that `sin()` can't handle big numbers, or? – Robin Oct 27 '21 at 00:53
  • `sin` can handle big numbers; it handles them by doing `fmod` internally. If you call `fmod` first it will speed up `sin`, but the extra time taken in doing the `fmod` will be at least as much as you saved. – Nate Eldredge Oct 27 '21 at 00:55
  • I edited my answer to show you what I mean. The `double time` value doesn't grow big, so it doesn't get imprecise. – Robin Oct 27 '21 at 01:41
  • Function `fmod` is rather costly. As `time` grows slowly you could replace it with `if (time > 1) time -= 1;` – tstanisl Oct 27 '21 at 07:06
  • Although you are on the right track, you should use a bignum library like GMP or libcrypto. They can perform these calculations without the loss of precision. BTW, since you are developing on Windows 7 that means you are either using x86 or x86-64. On these platforms, x87 is capable of performing extended precision (as per the 754 standard) operations with 80 bits, but I am unaware of compiler intrinsics that can give you that capability without resorting to assembly code. – Nadeem Taj Oct 28 '21 at 09:06
  • @NadeemTaj: Seems weird that you assert the platform is x86 when the question explicitly says ARM Cortex M. – Ben Voigt Oct 29 '21 at 20:48
Answer (score 2)

Fascinating thread. Minsky's algorithm mentioned in Walter Mitty's answer reminded me of a method for drawing circles that was published in Electronics & Wireless World and that I kept. (Credit: https://www.electronicsworld.co.uk/magazines/). I'm attaching it here for interest.

However, for my own similar projects (for audio synthesis) I use a lookup table, with enough points that linear interpolation is accurate enough (do the math(s)!)

[Image: circle-drawing algorithm clipping from Electronics & Wireless World]

Tim V

  • This is an algorithm for calculating `y = sqrt(1 - x^2)` (i.e. x sampled uniformly) rather than `y = sin(t)` (i.e. angle is sampled uniformly). – Yakov Galka Nov 04 '21 at 20:31