
I am working on a program in C that graphs a rose curve as ASCII art. The program uses custom trig functions (Taylor series to be exact):

int factorial(int n) {
    int p = 1;
    if (n == 0) return 1;
    for (int i = 1; i < n; i ++){
        p *= i;
    }
    return p;
}

float sine(float x, int t) {
    float s = 0;
    for (int i = 0; i < t; i ++) {
        float n = (i%2==0 ? 1 : -1);
        int d = factorial(i*2);
        float m = pow(x, i*2);
        s += (n/d)*m;
    }
}

float cosine(float x, int t) {
    float s = 0;
    for (int i = 0; i < t; i ++) {
        float n = (i%2==0 ? 1 : -1);
        int d = factorial(i*2+1);
        float m = pow(x, i*2+1);
        s += (n/d)*m;
    }
}

The t parameter denotes the number of terms, which is always 10.

And to graph the rose I use:

void point(float r, float theta) {
    float fx = r*cosine(theta, TM);
    float fy = r*sine(theta, TM);
    int x = (int) round(fx)+50;
    int y = (int) round(fy)+50;
    grid[x][y] = 1;
}
...other code...
for (float theta = 0; theta < PI; theta += 0.5) {
    float r = SCALE*cosine(theta*CT, TM);
    point(r, theta);
}

The SCALE variable is the size of the rose and is defined at the top of the program. The TM variable is the number of terms in the Taylor series, and the CT variable is another parameter. It works fine using the default C trig functions, but when I switch to mine it gives me a bus error:

make: *** [run] Bus error: 10

This is on Mac OS X 10, by the way. The functions I wrote give me the correct values for a few inputs but just don't work here.

The Taylor series I'm using are here, and my implementation works in radians:

(Image: the standard series sin(x) = x - x^3/3! + x^5/5! - ... and cos(x) = 1 - x^2/2! + x^4/4! - ...)

If you're wondering why I didn't just use the default trig functions, I'm doing this for fun and custom trig functions are part of what I want to do for this project. I'm also not the most experienced C programmer so keep that in mind.

  • Aside: I'd find `factorial` easier to read/understand if it worked down from `n`, like how `4!` is normally written expanded as `4 * 3 * 2 * 1`. `int product = 1; while(n > 1) { product *= n--; }` – mtraceur Apr 17 '22 at 14:31
  • Please paste in the exact error, and also what operating system you're running this on. "it gives me a bus error" isn't necessarily precise enough. Could be a `SIGBUS` signal on Linux for example, and then people answering could work from "well, what are possible causes for that signal on that system" and so on. – mtraceur Apr 17 '22 at 14:35
  • Sorry for the lack of detail; I'm running it on Mac OS X 10 and the error says "make: *** [run] Bus error: 10" –  Apr 17 '22 at 14:36
  • Heads up, comments don't keep line breaks - even if you successfully paste them in they get replaced with spaces. For that reason and for the sake of other readers, I suggest editing the question body to include the exact error copy-paste. – mtraceur Apr 17 '22 at 14:40
  • Edit the question to provide a [mre]. – Eric Postpischil Apr 17 '22 at 14:42
  • Anytime I see a Taylor series expansion with a factorial function call in it I cringe. There are far better, more efficient, simpler ways to code this. – duffymo Apr 18 '22 at 12:57

3 Answers


At least these problems

All warnings not enabled

OP's single biggest mistake is not using the tools at hand. Enable all warnings to quickly identify problems like:

warning: conversion from 'double' to 'float' may change value [-Wfloat-conversion]
       float m = pow(x, i*2);

warning: control reaches end of non-void function [-Wreturn-type]

int overflow

With t == 10, cosine() calls factorial() with arguments up to 19, which is far outside 32-bit int range: 13! already exceeds INT_MAX. Signed integer overflow is undefined behavior (UB).
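To quantify that: 12! = 479001600 still fits in a 32-bit int, but 13! = 6227020800 does not. A small sketch (mine, not part of the original answer) that finds the cutoff:

```c
#include <limits.h>

/* Find the largest n whose factorial still fits in a 32-bit int.
   Uses a wide (long long) accumulator so the check itself cannot overflow. */
int max_factorial_arg(void) {
    long long p = 1;
    int n = 0;
    while (p * (n + 1) <= INT_MAX) {
        n++;
        p *= n;   /* p == n! after this line */
    }
    return n;     /* largest n with n! <= INT_MAX */
}
```

So every call past factorial(12) is already in UB territory, well before the 10-term series finishes.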

Missing return value in sine()

(Credit: @Spektre.)

Sine calculation wrong

Even with a small t and an added return s;, sine() is amiss: it sums even powers x^(2i)/(2i)!, which is the cosine series (and cosine() likewise computes the sine series).

Suggested re-write that does not readily overflow and avoids repetitive calculation by updating the term each iteration based on the prior term:

float sine_alt(float x, int t) {
  float x2m = -x*x;  // - to alternate term's sign.
  float s = x;
  float term = x;
  for (int i = 1; i < t; i ++) {
    int i2 = i*2;
    term *= x2m/(i2 * (i2 + 1));
    s += term;
  }
  return s;
}
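For symmetry, a companion cosine in the same incremental style (my sketch, not part of the original answer; consecutive cosine terms differ by a factor of -x²/((2i-1)·2i)):

```c
float cosine_alt(float x, int t) {
  float x2m = -x*x;  // - to alternate term's sign.
  float s = 1.0f;    // first term of cos: x^0/0! = 1
  float term = 1.0f;
  for (int i = 1; i < t; i ++) {
    int i2 = i*2;
    term *= x2m/((i2 - 1) * i2);  // builds +/- x^(2i)/(2i)! from the prior term
    s += term;
  }
  return s;
}
```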

float vs. double

With float objects, usually one uses float math and not double math.

// pow(x, i*2+1);
powf(x, i*2+1);
chux - Reinstate Monica

Okay, so when I see a "bus error: 10" on MacOS, my first thought is that your program tried to access a memory address wrongly.

(Another common error that tells me roughly the same general gist is "segmentation fault/violation". There is nuance to which one means what on which system that you can look up in other questions.)

Now that we know that, we can significantly narrow down possible causes! It must be a place in the code where your memory accesses could be targeting the wrong memory.

Looking over the code you initially posted, the only place I can see where that might happen is the assignment to grid[x][y].

So my next thought is: how could x or y end up outside of your allocated grid space? I'm assuming grid has known fixed dimensions, and trusting your claim that the only thing you change to trigger the problem is swapping which trig functions you used.

Based on that, I would think maybe your trig functions are returning values which end up (due to rounding or approximation or whatever) causing x or y to be outside of the grid.

The simplest thing you could do to debug this further is to add a printf("x=%d y=%d\n", x, y); line just before the line where you have grid[x][y]. And then based on what that tells you, go from there. Add/move your printf statements elsewhere to narrow down when the values become wrong and so on.

Also, I missed this earlier, thanks to a comment for catching it, but the functions you posted are missing any return statement. If this is true in your real code, and you don't have your compiler configured to error out in that case, that could definitely cause obviously or subtly wrong results anywhere between all and none of the time. (To learn more about all the ways it could possibly go wrong, some search terms you'll want are "calling convention" and "ABI".)

(You may want to also take the time to get familiar with running your program live under a debugger, so that you can see variable values changing as the code steps through each instruction, but that's a separate detour onto another learning curve. For example, if the lack of a return statement is your problem, and you know what to look for in a debugger, like the return value being in a certain register when your function returns, you may notice an accidentally omitted return statement sooner.)

Also, based on a related question that StackOverflow recommended, it sounds like maybe a Taylor series is just inherently too inaccurate once you get too far from zero? Might be relevant here.

mtraceur
  • The functions seemed to go outside of bounds so that looks correct. But I wonder why the trig functions aren't working? I used 8 terms in the series. The array is 100 by 100. –  Apr 17 '22 at 15:20
  • @Jacob well, couple things you can test to narrow down possibilities on that problem. First, you could raise the number of terms, maybe even by a large number (modern computers are very fast!). I like to double. So see if you stop having the problem at 16 terms, 32 terms, 64 terms, 128 terms, etc. If the problem doesn't go away with more terms, you have reason to think you might have implemented the series wrongly. If it does go away with more terms, maybe there is another source of error (such as how a binary floating point number can't represent values like 0.2 exactly). – mtraceur Apr 17 '22 at 15:33
  • @Jacob second, you could pick a specific example where you know it goes out-of-bounds, then run your `sine` or `cosine` function with print statements added at each iteration of the loop (or under a debugger and step through each iteration) to see the intermediate values of all the variables. This can help you see how/why/when a function's computation goes off the rails / diverges from what you expect. – mtraceur Apr 17 '22 at 15:36
  • @Jacob Also, I very strongly recommend taking the time and detour to learn how to take C code, compile it into a shared/dynamically-loadable library, run Python and load that library and call its C functions with Python's `ctypes` library, and then do "property-based testing"/"fuzzed unit testing" on it with `hypothesis` (and a Python test runner like `pytest`, obviously). I'm sure there are ways to do this without Python, but that is the best experience I've personally found for easily doing very thorough testing of C code. – mtraceur Apr 17 '22 at 15:48
  • @Jacob the functions do not work because you forgot to add `return s;` at their end ... see my answer for more info – Spektre Apr 18 '22 at 08:00
  • @Spektre great catch! Thanks for spotting that. I was assuming I didn't need to distrust basic stuff like that (too much good faith I guess... I thought surely someone would either test their functions enough or carefully read over their code enough to catch stuff like that). – mtraceur Apr 19 '22 at 16:34
  1. your goniometric (trig) computation is wasting too much CPU power

    see C++ function to approximate sine using taylor series expansion

    You are computing the same stuff again and again, and you mix int and float, causing slow conversions ...

  2. how big is TM?

    you know factorial overflows pretty quickly, and with floats you cannot get better precision than 5 decimals no matter what TM is anyway, so I would not dare to go above 15 ...

  3. you are not returning the float value in sine and cosine

    That is most likely corrupting the STACK and causing your error, as the rest of the code expects a return value, but I do not code on your platform so I might be wrong ... Better compilers throw a warning on this ...

    And of course, without a return your sin and cos simply do not work: even if you compute the result, it will not propagate to the rest of your code. Instead you most likely get gibberish from the STACK, causing seemingly random numbers or even INF or NAN ...

So simply repair your functions like this (I hope I ported it to C from the link above correctly):

float sine(float x, int n)   /* n = number of terms, e.g. 15 (C has no default arguments) */
    {
    int i;
    float y=0;      // result
    float x2=x*x;   // x^2
    float xi=x;     // x^i
    float ii=1;     // i!
    if (n>0) for(i=1;;)
        {
        y+=xi/ii; xi*=x2; i++; ii*=i; i++; ii*=i; n--; if (!n) break;
        y-=xi/ii; xi*=x2; i++; ii*=i; i++; ii*=i; n--; if (!n) break;
        }
    return y;
    }

As you can see, there is no need for expensive operations like modulo, no calls to sub-functions, and no repetitive computing of the same stuff in each iteration like you do ... Do not forget to repair your cosine too
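A matching cosine repair in the same style (my port of the same incremental scheme, not from the linked answer; the only changes are the starting power x^0 and the loop counter starting at 0):

```c
float cosine(float x, int n)  /* n = number of terms */
    {
    int i;
    float y=0;      // result
    float x2=x*x;   // x^2
    float xi=1;     // x^i (starts at x^0)
    float ii=1;     // i!
    if (n>0) for(i=0;;)
        {
        y+=xi/ii; xi*=x2; i++; ii*=i; i++; ii*=i; n--; if (!n) break;
        y-=xi/ii; xi*=x2; i++; ii*=i; i++; ii*=i; n--; if (!n) break;
        }
    return y;
    }
```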

Spektre
  • It's probably not the stack on a modern CPU, most ABIs nowadays use registers to pass return values. Although I guess with optimizations turned off the stack could be getting involved too, and even could be getting involved in a way that corrupts the stack. Overall great point about the return value though. – mtraceur Apr 19 '22 at 16:42
  • I have some critique of "expensive operations like modulo and no calls to sub-functions nor repetitive computing of the same stuff in each iterations like you do" because any decent modern compiler can handle a lot of that automatically nowadays with optimizations turned on, and there are many many advantages to not manually micro-optimizing until after you've proven with measurements *in your specific case* that the compiler can't optimize to an acceptable level. – mtraceur Apr 19 '22 at 16:48
  • For example, a compile-time constant `% 2` is trivially automatically optimized out to a `& 1` on any C implementation where those are actually equivalent (all of them I think). Similarly, a decent compiler nowadays can probably notice when a loop is compile-time deducible to alternate behavior every {some small number} iterations and unroll the loop as appropriate - and if it can't now, you can expect that it will learn to eventually, because such automated optimization algorithms are huge wins across the board for basically all of programming. – mtraceur Apr 19 '22 at 16:56
  • Basically, it *is* good to teach people when a way of computing something is very inefficient, but it's equally important to not teach people to make their code harder to understand and optimize (both for humans and automatic tools) by manually optimizing it for the current situation in ways that obfuscates the big-picture intent. Look at Duff's Device, or the XOR swap. *Optimized code ages unoptimized; naive clear code ages optimized* (assuming continued progress in automatic optimizations, and there's still much much more progress left to make in that field). – mtraceur Apr 19 '22 at 17:09
  • @mtraceur "% 2 is trivially automatically optimized out to a & 1" --> when it is `some_unsigned % 2`, yes. With `some_signed_int %2`, the result is 1,0,-1. `& 1` is not the same. – chux - Reinstate Monica Apr 19 '22 at 17:10
  • Returning to some positives: great catch on how factorial overflows fast enough that a large number of iterations might not be viable! I should've thought about that when suggesting testing raising iterations in a comment on my answer. – mtraceur Apr 19 '22 at 17:11
  • Anyway despite my critiques, I +1'd this answer, because I think it *is* helpful and touches on several points which ideally we would all consider. – mtraceur Apr 19 '22 at 17:13
  • @chux-ReinstateMonica yes and no. It is often trivial to prove that it is the same in practice, especially if all the code is in the same translation unit (or whole-program / "link time" optimization is being used and is good enough), and compilers treating cases which hit undefined behavior as license to disregard those cases often makes it even more trivial. But it's very fair that it's no longer *as locally* or *as trivially* provable: in this case, the proof is that `i` starts at `0`, only ever goes up, and the optimization is free to be wrong upon overflow since that's undefined behavior. – mtraceur Apr 19 '22 at 17:27
  • (Aside: despite defending the real-world optimizability of `% 2` here, for an alternating-sign situation I personally wouldn't use `sign = iteration % 2 ? 1 : -1`; I would use `sign = -sign;`, because I think that much more clearly, explicitly, and directly communicates both the human semantics and the logic of flipping on each iteration.) – mtraceur Apr 19 '22 at 17:33
  • @mtraceur 1. `sign = -sign;` is also a good way (branchless), as any condition is usually the biggest performance killer these days (but it can be done even better without the need to multiply later) ... 2. The compilers I am using use a register return value only for inline asm functions returning int (as (E)AX on x86) ... 3. STACK corruption occurs not only on the CPU side; I have encountered FPU STACK problems due to the same missing return value in a float function on x86 in the past, and on some occasions even the FPU got stuck in an error mode until reset with the `finit` instruction from assembly ... – Spektre Apr 19 '22 at 21:58