
I tried computing the real part of clog(a + i*b) using the following approach

Consider 'x' to be the complex number, x = a + i*b. Let z be the complex log of x. Then

real(z) = 0.5 * log(a^2 + b^2)

This approach gives a huge error in terms of ULPs, especially for values between 0.5 and 1.0.

I tried other approaches to avoid squaring both the real and imaginary parts, such as:

Let t = b / a; real(z) = log(a) + 0.5 * log1p(t*t)

The error persisted with this approach as well. I understand that the error likely comes from the squaring of a and b, so I tried using fma() operations to recover the rounding error of the squares of 'a' and 'b':

Let a2 = a * a and b2 = b * b

err_a2 = fma(a,a, -a2)

err_b2 = fma(b,b,-b2)

I then tried 0.5 * log(((err_a2 + err_b2) + a2) + b2) to get the real part of the complex log of x.

But the result is still inaccurate.

How can I compute log(sqrt(a^2 + b^2)) accurately (error within 2 ULP)? I think I need to compute a^2 + b^2 at a higher precision, but I am not sure how to proceed from here.

  • If you want better precision than what the standard library offers, you could try a multi-precision library like [gmp](https://gmplib.org/) – Serge Ballesta Feb 01 '22 at 12:20
  • Code could use `double real_part = clog(x)`. – chux - Reinstate Monica Feb 01 '22 at 15:21
  • It seems to be a hard problem in general, and I don't know of any solution not involving calculating intermediate results at higher precision. The case `a = 0x1.fffffe0000010p-12`, `b = 0x1.fffffc0000040p-1` is an example of how bad things can get: the correctly rounded answer in this case is tiny, around `3.7e-37` (`0x1p-121`), but most algorithms have a hard time producing anything other than `0.0`, which is many billions of ULPs away. – Mark Dickinson Feb 01 '22 at 15:22
  • @MarkDickinson Thanks for posting a challenging case. I [almost](https://stackoverflow.com/a/70943599/2410359) made it. – chux - Reinstate Monica Feb 01 '22 at 16:55
  • Similar catastrophic cancellation occurs with z-1 around z=1. –  Feb 01 '22 at 19:44
  • Please explain how you observe the "huge error", for example with numerical cases. –  Feb 01 '22 at 19:50
  • For example, x = 0.70832094846615412 + i * 0.70576270714467548; the result using the approach discussed is -9.0225724105068384e-05, which is off by around 886 ULPs. The correctly rounded result is -9.0225724105080397e-05. – Joseph Arnold Feb 04 '22 at 10:30
  • @JosephArnold "The correctly rounded result is -9.0225724105080_397e-05" --> `double` encoding typically allows for values of -9.0225724105080_392e-05 and the next nearest -9.0225724105080_405e-05, but not -9.0225724105080_397e-05. How is the suggested correct answer the correct _rounded_ one? – chux - Reinstate Monica Feb 05 '22 at 14:56

3 Answers


sqrt(a^2 + b^2) is just std::hypot(a,b). With a bit of luck, that's already precise.

MSalters

... calculate the complex log of a double ...

Code could use `double real_part = (double) clog(x);`.


To calculate the real part of the complex log of a double without using clog(x) near |x| == 1.0, consider using log1p()*1 to form a higher-precision result.

The core issue is that |x| - 1.0 can suffer severe loss of precision, and this is the first step in determining log().

0.5 * log(a^2 + b^2) is mathematically 0.5 * log1p(a^2 + b^2 - 1). When |x| is near 1.0 and |a| > |b|, use 0.5 * log1p((a-1)*(a+1) + b^2). This subtracts the 1.0 from |a| exactly and retains precision with (a-1)*(a+1) + b^2. This differs from subtracting 1.0 from |x|, as the calculation of |x| has already lost important precision.

#include <complex.h>
#include <math.h>
#include <stdio.h>

#define root2 1.4142135623730950488016887242097

double clog_real(double a, double b) {
  double real_x;
  double h = hypot(a, b);
  // |x| near 1.0?
  if (h >= root2 / 2 && h < root2) {
    // Subtract 1 from the larger part
    if (fabs(a) > fabs(b)) {
      real_x = 0.5 * log1p((a - 1) * (a + 1) + b * b);
    } else {
      real_x = 0.5 * log1p((b - 1) * (b + 1) + a * a);
      // or, here and likewise above, IF you have a good fma():
      // real_x = 0.5 * log1p(fma(a, a, (b - 1) * (b + 1)));
    }
  } else {
    real_x = log(h);
  }
  return real_x;
}

int main() {
  double a = 0x1.fffffe0000010p-12 * 2;
  double b = 0x1.fffffc0000040p-1;
  printf("%g %g\n", a, b);
  complex double c = a + I * b; // I from <complex.h>
  printf("%g\n", (double) clog(c));
  printf("%g\n", clog_real(a, b));
}

Output

0.000976562 1
3.57628e-07
3.57628e-07

Re: "I tried using fma() ..." --> Some fma() are low quality.


*1 The log1p functions compute the base-e (natural) logarithm of 1 plus the argument.

chux - Reinstate Monica
  • Assuming your `fma` is correct, is `(a - 1) * (a + 1)` ever better than `fma(a,a,-1.0)`? – chtz Feb 01 '22 at 18:30
  • As `|a|` is in the [.707 ... 1.414] range, `(a - 1) * (a + 1)` is certainly 2 or less ULP off. `fma(a,a,-1.0)` should be within 0.5 ULP but may be [worse](https://stackoverflow.com/a/42172145/2410359). – chux - Reinstate Monica Feb 01 '22 at 19:10
  • The approach still gives results that are off by a few hundred ULPs for values close to 1. For example, x = 0.70832094846615412 + i * 0.70576270714467548; the result is -9.0225724105068384e-05, which is off by around 886 ULPs. The correctly rounded result is -9.0225724105080397e-05. – Joseph Arnold Feb 02 '22 at 08:22
  • @JosephArnold As mentioned in the answer, some `fma()` are of low quality. It may be that yours is too. Try again without `real_x = 0.5 * log1p(fma(a, a, (b - 1) * (b + 1)));`. With `clog_real(0.70576270714467548, 0.70832094846615412)`, I get -9.0225724105080392e-05 which is 0 ULP from -9.0225724105080397e-05 once the later number is converted to the closest `double`. BTW, what is the value of `FLT_EVAL_METHOD` on your machine? – chux - Reinstate Monica Feb 02 '22 at 12:39
  • @chux-ReinstateMonica FLT_EVAL_METHOD is 0 in my machine. Had already tried real_x = 0.5 * log1p(fma(a, a, (b - 1) * (b + 1))) but the accuracy did not improve. – Joseph Arnold Feb 02 '22 at 15:56
  • @JosephArnold On my machine `FLT_EVAL_METHOD==2`; likely `(b - 1) * (b + 1) + a * a` is taking advantage of `long double` math. To be clear, I was suggesting to _not_ use `0.5 * log1p(fma(a, a, (b - 1) * (b + 1)))`. Perhaps later I will try to force `double` math and test. – chux - Reinstate Monica Feb 02 '22 at 17:32
  • I just looked up the implementation of [`clog`](https://sourceware.org/git/?p=glibc.git;a=blob;f=math/s_clog_template.c) in `libm`. I guess the most important difference to your solution is that (when necessary) they call a specialized [`__x2y2m1`](https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/ieee754/dbl-64/x2y2m1.c#l58) method to calculate `x*x+y*y - 1` with high precision, by splitting the products in halves and adding everything by split additions. – chtz Feb 04 '22 at 22:35
  • I tried something similar using Dekker's multiplication algorithm, which computes the error that arises in multiplying x and y: r1 + r2 = x * y, where r2 is the error. I used the approach for squaring both x and y, then added up the errors first and then the squares of x and y. Need to investigate glibc's method further. – Joseph Arnold Feb 05 '22 at 07:16

Have you tried log(sqrt(a*a + b*b))? Normally, the square root has the opposite effect of the square, so if you calculate it you will take the log of a better-suited number.

Anyway, to calculate logarithms of numbers close to 1.0, you can calculate the derivatives of log(z + 1) at z == 0 to get a better approximation, as the function is analytic in the circle of radius < 0.5, so you will get a good Taylor approximation. This approximation is written below (thanks to Wolfram Alpha):

log(1+x) ~= x - x^2/2 + x^3/3 - x^4/4 + ... (-1)^(n)*x^(n+1)/(n+1) + O(x^(n+2))

This series converges absolutely on an open disk of radius 1, so it is well suited for calculating logarithms of values close to 1 (indeed, it is what is used in many places).

Of course, if you want to solve this in the complex plane, you need to operate the calculation as complex numbers.

Luis Colorado
  • The computation of log is quite accurate (within 1 ULP). The error is probably from the squaring of the real and imaginary terms and the subsequent square root operation. When I tested the log() routine for values close to 1, the results were quite accurate. – Joseph Arnold Feb 02 '22 at 11:18
  • @JosephArnold, when you square a floating-point number, you lose half the significand in relative precision, but if you square-root it again, you recover it. This is why I recommended taking the square root before log(). Anyway, the Taylor series will help you if you do the calculation in complex. Or, as suggested, use the `clog()` function. It is there for you :) – Luis Colorado Feb 02 '22 at 11:31
  • BTW, being under 1 ULP is not going to be possible without the help of a better-precision calculator... :) – Luis Colorado Feb 02 '22 at 11:37