
I've been playing around with calculating the square root of 2 and the like. It's easy to come up with an algorithm that will produce n correct binary digits. What I'd like help with is determining how many binary digits I need to get m correct decimal digits. Converting m binary digits gives me m decimal digits, but those m decimal digits may not all be correct yet.


EDIT: I've determined that the lower bound on the binary precision = ceil(log2(10^m)).
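
For example, here is a quick check of that bound as a minimal Python sketch (the function name is mine); rewriting log2(10^m) as m*log2(10) avoids constructing the huge value 10^m:

```python
import math

def min_binary_precision(m):
    """Lower bound on binary digits for m correct decimal digits:
    ceil(log2(10^m)) = ceil(m * log2(10))."""
    return math.ceil(m * math.log2(10))

print(min_binary_precision(5))  # 17, matching the comments below
```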

Thinking about it, there might not be a strict upper bound, since a carry from any lower power of 2 (when converting to base 10) could potentially affect any higher base-10 digit.

This may thus be a dynamic problem that requires evaluating the fractional expansion at m binary digits and determining which additional binary digits could potentially cause a carry in base 10.


Edit 2: I was probably overthinking this. After the initial calculation I can keep adding 1x10^(-precision) and squaring the result until the square exceeds 2, then subtract 1x10^(-precision) and I'll have my answer. Nevertheless, I am still interested in finding/developing such an algorithm :)
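
A minimal sketch of that refinement step (the helper name and starting estimate are mine; it assumes the binary computation already produced an estimate within a few units of 10^(-precision), so the loops run only a few times):

```python
def refine_sqrt2(n, p):
    """Nudge a scaled estimate n ~ sqrt(2) * 10^p up by one ulp (10^-p)
    while the square stays <= 2, then back off if it overshot.
    Exact integer math: y = n / 10^p, and y^2 <= 2  <=>  n^2 <= 2*10^(2p)."""
    target = 2 * 10 ** (2 * p)
    while (n + 1) ** 2 <= target:  # adding 10^-p keeps the square <= 2
        n += 1
    while n ** 2 > target:         # subtract 10^-p if we exceeded 2
        n -= 1
    return n

# A 10-digit estimate that is 2 ulps low:
print(refine_sqrt2(14142135621, 10))  # -> 14142135623, i.e. 1.4142135623
```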

  • You mean precision = ceil(log2(10^-m)), with a minus in front of m, isn't it? – Michael Baudin May 12 '20 at 12:46
  • m represents the number of digits (positive). If m = 5 decimal digits, then I need at least a precision = ceil(log2(10^5)) = 17. On the other hand, ceil(log2(10^-5)) = -16, which is not what I want. I'd need to change the equation to -floor(log2(10^-m)). Of course, without the leading negative sign, the latter equation gives you a negative precision value, which might be preferable. – Ryan Pierce Williams May 12 '20 at 15:58

2 Answers


What method are you using?

I am assuming a binary search for x in y = x^2.
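
For reference, a minimal fixed-point sketch of such a binary search (my own illustration, not necessarily the questioner's code): find the largest x with x^2 <= y, carrying fbits fractional binary digits.

```python
def isqrt_fixed(y, fbits):
    """Binary search for the largest x with (x / 2^fbits)^2 <= y,
    i.e. x*x <= y << (2*fbits). Pure integer arithmetic."""
    target = y << (2 * fbits)
    lo, hi = 0, (y + 1) << fbits  # sqrt(y) <= y + 1, so this bounds x
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * mid <= target:
            lo = mid              # mid still fits, keep it
        else:
            hi = mid - 1
    return lo

x = isqrt_fixed(2, 20)
print(x / 2**20)  # ~1.414213, sqrt(2) with 20 fractional binary bits
```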

The integer part is determined by the result sqrt(y) and cannot be truncated, otherwise the result would be wrong. However, x needs only half the bits of y, so:

ni2 = ceil(log2(|y|) / 2)

The fractional part is tricky. After the nonlinear start of the first digits, the dependence stabilizes; here is the reversed formula from the linked answer:

nf2 = (((nf10-7.810)/9.6366363636363636363636)+1.0)<<5;
  • ni2 is the number of binary digits (bits) in the integer part
  • nf2 is the number of binary digits (bits) in the fractional part
  • nf10 is the number of decimal digits in the fractional part

By the way, I used 32-bit-aligned values, as that is what my arithmetic uses, so:

9.6366363636363636363636 ≈ 32 * 0.30102999566398119521373889472449 (decimal digits per 32-bit word)
0.30102999566398119521373889472449 = log10(2)
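
Putting it together, a small helper implementing the reversed formula (my sketch; the <<5 in the C-style pseudocode above is a multiply by 32, applied after the value is truncated to an integer number of 32-bit words):

```python
def nf2_from_nf10(nf10):
    """Fractional binary bits needed for nf10 fractional decimal digits,
    per the reversed formula above, rounded to whole 32-bit words."""
    return int((nf10 - 7.810) / 9.6366363636363636363636 + 1.0) << 5

print(nf2_from_nf10(20))  # -> 64 bits, i.e. two 32-bit words
```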
Spektre
  • I developed my own integer square root algorithm. I'm then calculating integer_sqrt(2^(p+1)). If needed, you can divide by 2^p to put it into a fractional format. – Ryan Pierce Williams May 12 '20 at 15:30
  • Edit: integer_sqrt(2^(2p + 1)) (see the sketch after these comments) – Ryan Pierce Williams May 12 '20 at 16:07
  • @RyanPierceWilliams You might like to take a look at this: [How to get a square root for 32 bit input in one clock cycle only?](https://stackoverflow.com/a/34657972/2521214) Mine is 16T, but it is still a beauty without any higher math operations, not even multiply. – Spektre May 12 '20 at 21:07
  • That looks like an interesting algorithm :) I'm sticking with my own sqrt algorithm that I developed for the purposes of my research (much better than a binary search), but I could see applications where that algorithm could be beneficial – Ryan Pierce Williams May 12 '20 at 22:42
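
A quick check of the formula from the comments above (a sketch; Python's math.isqrt stands in for the questioner's own integer_sqrt): integer_sqrt(2^(2p + 1)) equals floor(sqrt(2) * 2^p), since sqrt(2^(2p + 1)) = 2^p * sqrt(2).

```python
import math

p = 20
n = math.isqrt(2 ** (2 * p + 1))  # floor(sqrt(2) * 2^p)
print(n / 2 ** p)                 # ~1.414213, p fractional binary bits
```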

Let x be a real and y be its approximation.

Let RE be the relative error of y with respect to x:

RE(x, y) = abs(x - y) / abs(x)

Let b be an integer greater than 1. The Log-Relative Error in base b is defined as:

LREb(x, y) = -logb(RE(x, y))

where logb is the base-b logarithm:

logb(z) = log(z) / log(b)

for any positive z.

The LRE in base b represents the number of common digits between x and y. Here, the "number of correct digits" is not an integer but a real number: this simplifies the calculations that follow by avoiding the need for ceil and floor functions, provided we accept statements such as "y has 2.3 correct digits with respect to x". More precisely, if x and y have q common base-b digits, then:

LREb(x, y) >= q - 1

With this equation, if the relative error has an upper bound, then the LREb has a lower bound. More precisely, if:

RE(x, y) <= epsilon

then:

LREb(x, y) >= -logb(epsilon)

Also, if the number of correct digits in base 10 is LRE10 = p, then RE = 10^-p, which implies that the number of correct digits in base 2 is:

LRE2 = -log2(10^-p) = p * log2(10) ≈ 3.32 * p
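
These definitions are straightforward to compute; a minimal sketch (helper names are mine):

```python
import math

def relative_error(x, y):
    """RE(x, y) = |x - y| / |x|"""
    return abs(x - y) / abs(x)

def lre(x, y, b):
    """Log-Relative Error in base b: -log_b(RE(x, y))."""
    return -math.log(relative_error(x, y)) / math.log(b)

x = math.sqrt(2)      # reference value
y = 1.41421           # approximation of sqrt(2)
print(lre(x, y, 10))  # ~5.6 correct decimal digits
print(lre(x, y, 2))   # ~18.6 correct binary digits, i.e. 5.6 * log2(10)
```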

Michael Baudin
  • It seems, however, that to use these equations you need to know x before you can evaluate the approximation y. Granted, the square root of 2 has been published to something like 10 trillion decimal digits, but I'd prefer an approach that doesn't require having the actual value (to so many digits) in order to determine whether the estimate is correct. – Ryan Pierce Williams May 12 '20 at 15:20
  • In order to estimate the LRE from the exact value x and its approximation y, you do indeed need to know the value of x. This is what we commonly do to assess the quality of an algorithm on a validation test case with a known exact solution x. However, you do not necessarily need to know x: knowing the relative error is sufficient. This can be done if the algorithm we use guarantees a known relative error. – Michael Baudin May 12 '20 at 19:35
  • I see. I will have to study up more on this. Thank you very much :) – Ryan Pierce Williams May 12 '20 at 20:52