1

Since the following expansion for the logarithm holds:

log(1-x)=-x-x^2/2-x^3/3-...

one can calculate the following functions, which have removable singularities at x=0:

log(1-x)/x=-1-x/2-...

(log(1-x)/x+1)/x=-1/2-x/3-...

((log(1-x)/x+1)/x+1/2)/x=-1/3-x/4-...

I am trying to use NumPy for these calculations, and specifically the log1p function, which is accurate near x=0. However, convergence for the aforementioned functions is still problematic.

Do you know of any existing functions implementing these formulas, or should I write one myself using the previous expansions, which would be less efficient?
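For illustration, here is a minimal check of the cancellation for the second function of the family (the function name g2_series and the term count are my own; exact naive values are platform-dependent):

```python
import numpy as np

def g2_series(x, terms=10):
    """(log(1-x)/x + 1)/x via its series -1/2 - x/3 - x**2/4 - ..."""
    return -sum(x**k / (k + 2) for k in range(terms))

x = 1e-8
naive = (np.log1p(-x) / x + 1.0) / x   # subtraction near 1 loses digits
stable = g2_series(x)                  # stays close to the limit -1/2
print(naive, stable)
```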

amonk
whirlwind
    Define problematic. What problem specifically are you having? – ev-br Apr 24 '18 at 06:08
  • The value for example for x=1.0e-8 is very large, nowhere near the theoretical limit, especially for higher order functions of the family above. – whirlwind Apr 24 '18 at 06:34
  • @whirlwind Please provide numeric examples and code: which value is very large, what is the theoretical limit? How exactly do you calculate them? – MB-F Apr 24 '18 at 06:53
  • 1
    @whirlwind an "infinite" series sum means "infinite" rounding errors in floating point. To compute `log`, [other methods](https://stackoverflow.com/a/42108287/2521214) are used on computers, like binary search. If you insist on the summing, then you need to handle the precision like this: [see the integration precision chapter](https://stackoverflow.com/a/28020934/2521214) – Spektre Apr 24 '18 at 06:57
  • @Spektre Thanks for the suggestions – whirlwind Apr 24 '18 at 08:09
  • @whirlwind also you should sum the smallest values first (in reverse order), or even in pyramid order (divide and conquer), so you always add values of similar magnitude. In forward order the bigger values come first, and that skews your result later on once the binary exponents are too far apart, since you have just 53 bits of mantissa in a 64-bit double – Spektre Apr 24 '18 at 08:40
  • What is your question? I assume it is about the last three functions mentioned. What do you mean by "convergence" of the functions? At what x do you want to evaluate? –  Jun 16 '21 at 08:42
  • Presumably, you mean removable singularities at 0? –  Jun 16 '21 at 08:43
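Spektre's point about summation order and rounding can be seen with a minimal sketch, using the standard library's math.fsum as the compensated alternative:

```python
import math

vals = [0.1] * 10
print(sum(vals))        # plain left-to-right: 0.9999999999999999 on IEEE doubles
print(math.fsum(vals))  # compensated (Shewchuk) summation: 1.0
```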

3 Answers

3

The simplest thing to do is something like

In [16]: from numpy import log1p

In [17]: def logf(x, eps=1e-6):
    ...:     if abs(x) < eps:
    ...:         return -0.5 - x/3.
    ...:     else:
    ...:         return (1. + log1p(-x)/x)/x

and play a bit with the threshold eps.

If you want a numpy-style, vectorized solution, replace the if with np.where:

>>> np.where(np.abs(x) > eps, (1. + np.log1p(-x)/x) / x, -0.5 - x/3.)
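One caveat: np.where evaluates both branches, so values below the threshold (including x=0) still hit the division and raise warnings. A sketch that suppresses them with np.errstate (the name logf_vec is my own):

```python
import numpy as np

def logf_vec(x, eps=1e-6):
    x = np.asarray(x, dtype=float)
    # both branches of np.where are evaluated, so guard the division
    with np.errstate(divide='ignore', invalid='ignore'):
        direct = (1.0 + np.log1p(-x) / x) / x
    return np.where(np.abs(x) > eps, direct, -0.5 - x / 3.0)

print(logf_vec([0.0, 1e-8, 0.5]))
```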
ev-br
-1

Why not successively square the candidate, after initially extracting the exponent component? Whenever the square exceeds 2, divide by two and set the bit in the mantissa of your result that corresponds to the iteration. This is a quick and simple way of determining the base-2 logarithm, which a single multiplication then converts to base e or base 10.
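As a sketch of that idea (the function name and details are my own): normalise x to m * 2**e with m in [1, 2), then generate the fractional bits of log2(m) by repeated squaring.

```python
import math

def log2_by_squaring(x, nbits=52):
    # write x = m * 2**e with m in [1, 2); log2(x) = e + log2(m)
    m, e = math.frexp(x)        # m in [0.5, 1)
    m, e = m * 2.0, e - 1       # shift to m in [1, 2)
    result = float(e)
    bit = 0.5
    for _ in range(nbits):
        m *= m                  # squaring doubles log2(m)
        if m >= 2.0:            # integer bit appeared: record it
            m /= 2.0
            result += bit
        bit /= 2.0
    return result

print(log2_by_squaring(10.0), math.log2(10.0))
```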

-1

Some predefined functions don't work at singularity points. One simple-minded solution is to compute the series by summing its terms one by one.

For your example, the loop for log(1-x) would be:

s = 0.0
for k in range(1, n + 1):
    s += x**k / k
s = -s   # s now approximates log(1 - x)

Then you keep adding terms until the last term is below a small threshold.
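A runnable version of that stopping rule (names are my own), which converges for |x| < 1:

```python
def log1m_series(x, tol=1e-16):
    """Sum -x**k / k until the next term drops below tol (|x| < 1)."""
    s = 0.0
    power = x
    k = 1
    while abs(power / k) > tol:
        s -= power / k
        power *= x
        k += 1
    return s

print(log1m_series(0.5))   # close to log(0.5)
```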

L Maxime