TL;DR: The problem comes from numerical instabilities.
First of all, here is a simplified piece of code in which the exact same problem appears (with different values):
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 50, 0.1)
plt.plot(np.sin(x) - np.sinh(x) - np.cos(x) + np.cosh(x))
plt.show()
Here is another example where the problem does not appear:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 50, 0.1)
plt.plot((np.sin(x) - np.cos(x)) + (np.cosh(x) - np.sinh(x)))
plt.show()
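To see the divergence without a plot, one can evaluate both groupings and compare them directly. This is just a sketch: `a` and `b` are hypothetical names for the two expressions from the snippets above.

```python
import numpy as np

x = np.arange(0, 50, 0.1)
# Unstable grouping: huge sinh/cosh terms are mixed with tiny sin/cos terms.
a = np.sin(x) - np.sinh(x) - np.cos(x) + np.cosh(x)
# Stable grouping: values of similar magnitude are combined first.
b = (np.sin(x) - np.cos(x)) + (np.cosh(x) - np.sinh(x))
# Mathematically a == b, yet the computed values drift apart for large x.
print(np.max(np.abs(a - b)))
```

For small `x` the two agree, but near `x = 50` the hyperbolic terms are around 1e21, so the rounding error of a single addition can exceed the entire O(1) result.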
While the two snippets are mathematically equivalent over the real numbers, they are not equivalent in fixed-size floating-point arithmetic. Indeed, np.sinh(x) and np.cosh(x) both produce huge values when x is large, compared to np.sin(x) and np.cos(x). Unfortunately, when two fixed-size floating-point numbers are added together, there is a loss of precision. This loss can be huge (if not critical) when the orders of magnitude of the added numbers are very different. For example, in Python on a mainstream platform (i.e. with IEEE-754 64-bit floating-point numbers), 0.1 + 1e20 == 1e20 is true due to the limited precision of the number representation. Thus (0.1 + 1e20) - 1e20 == 0.0 is also true, while 0.1 + (1e20 - 1e20) == 0.0 is false (the resulting value is 0.1). Floating-point addition is not associative. In this specific case, the loss of accuracy can be so severe that no significant digits remain in the result. For more information about floating-point precision, please read this post.
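These claims can be checked directly in an interactive session (assuming standard IEEE-754 64-bit doubles, which CPython uses on mainstream platforms):

```python
# 0.1 is far below half an ulp of 1e20, so it is absorbed by the addition.
print((0.1 + 1e20) - 1e20)  # 0.0
# Here the two huge values cancel exactly first, so 0.1 survives.
print(0.1 + (1e20 - 1e20))  # 0.1
```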
The point is that you should be very careful when subtracting floating-point numbers. A good solution is to place parentheses so that the added/subtracted values have similar orders of magnitude. Variable-sized or higher-precision number types help a bit too. However, the best solution is to analyze the numerical stability of your algorithm. For example, studying the condition numbers of the numerical operations used in your algorithm is a good start.
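As one illustration of such an analysis applied here: since cosh(x) - sinh(x) is mathematically equal to exp(-x), the pair of huge, nearly-cancelling terms can be replaced by a single small, well-conditioned one. This is only a sketch of the idea, not the only possible rewrite:

```python
import numpy as np

x = np.arange(0, 50, 0.1)
# cosh(x) - sinh(x) == exp(-x) analytically; computing exp(-x) directly
# never involves values of magnitude ~1e21, so no catastrophic
# cancellation can occur at any x in the range.
y = (np.sin(x) - np.cos(x)) + np.exp(-x)
```

The rewritten expression stays bounded by sqrt(2) + 1 over the whole range, as the mathematics says it should.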
Here, a relatively good solution is simply to use the second snippet instead of the first.