How to calculate sin(a^b) where a^b can be a very large double number

I wonder if there is an algorithm for doing that.

Consider sin(60.9^100), for example.

How do I reduce 60.9^100 - 2pi*N into the [0, 2pi) range?

Amir Charkhi
aristotll
  • Use `BigDecimal` to compute the expression `(60.9^100) modulus 2Pi` to sufficient precision. Then convert it to a `double` and use `Math` to compute the `sine`. (If you compute `60.9^100` as a `double`, you will lose all significant precision.) – Stephen C May 31 '22 at 02:02
  • Implement your own [fast exponentiation](https://en.wikipedia.org/wiki/Exponentiation_by_squaring) with modulo in each iteration. – Marat May 31 '22 at 02:32
  • @Marat note that you need to have 2Pi to far more significant digits than `double` can express. Otherwise, accumulated error will lead to total loss of significance. (Each squaring doubles the error.) – Raymond Chen May 31 '22 at 04:19
  • Is the exponent always integer or not? – Spektre May 31 '22 at 10:04
  • @RaymondChen The error would be 64 times (if it doubles with each) after 6 squarings and a bit more after the subsequent calculations. Seems to be no problem for double range? – Sebastian May 31 '22 at 10:17
  • @Spektre Suppose it is always integer. – aristotll May 31 '22 at 11:57
  • @Sebastian oops you're right. However the problem is worse: the modulus operation subtracts numbers of similar magnitude, so you get catastrophic loss of significance, which squares at each iteration. If you run the program, you see that the running product collapses to zero after 17 iterations. – Raymond Chen May 31 '22 at 14:26
  • @RaymondChen So you probably lose 3 mantissa bits per iteration? It then depends on what accuracy the OP needs and what range the exponents have. – Sebastian May 31 '22 at 16:59
  • With (pi/2)² one would lose 2.22 bits, but could be a lot more with unlucky numbers. – Sebastian May 31 '22 at 17:06
  • @RaymondChen Even worse, [modulo in each iteration completely breaks](https://stackoverflow.com/q/72452006/12671057). – Kelly Bundy May 31 '22 at 17:34
  • @KellyBundy Good point. Modulus does not commute with multiplication if the base is non-integral or if the values you are multiplying are non-integral. For example, 1.5 * (2 mod 2) = 0, but (1.5 * 2) mod 2 = 1. – Raymond Chen May 31 '22 at 17:48
  • Afterthought: there is probably a way to transform this exponentiation into a multiplication problem, in complex numbers. – Marat May 31 '22 at 18:14
  • Probably better with a Fourier transform: https://www.cs.rug.nl/~ando/pdfs/Ando_Emerencia_multiplying_huge_integers_using_fourier_transforms_paper.pdf You can also save half the effort by dividing by the 100th root of 2pi, or equivalently by exp(ln(2pi)/100); then only the fractional part of the result is needed. – Sebastian May 31 '22 at 18:27
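Raymond Chen's non-commutativity example above is easy to verify directly; a quick sketch (in Python rather than Java, purely for illustration):

```python
# Modulus does not commute with multiplication when values are non-integral:
a = 1.5 * (2 % 2)   # 2 % 2 is 0, so this is 0.0
b = (1.5 * 2) % 2   # 3.0 % 2 is 1.0
print(a, b)         # 0.0 1.0 -- the two orders disagree
```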

4 Answers


Here are two semi-automatic ways. They still need manual configuration depending on the input. They could perhaps be made fully automatic, but at least they provide some way at all (and results to check further methods against). Edit: I have posted a fully automatic one by now.

Using Python's `decimal` module and the pi recipe from its documentation, apparently 60.9^100 % 2pi is about 0.4826 (which can then be given to `sin`). Results for computing with 180 to 290 digits of precision (code is at the end):

180 0.52113386128181643243087541425218797675113893601959695702254815952665...
190 0.48262316221535366629016856570286348468626721388098119253199769818223...
200 0.48262316221828443267196371207773451732899712100881145938907295835606...
210 0.48262316221828443267246775208563277802202330286500415343966588161647...
220 0.48262316221828443267246775208566344687793590859019274697600998645752...
230 0.48262316221828443267246775208566344687793479998648411772237563268709...
240 0.48262316221828443267246775208566344687793479998648362494580989872984...
250 0.48262316221828443267246775208566344687793479998648362494580991864728...
260 0.48262316221828443267246775208566344687793479998648362494580991864728...
270 0.48262316221828443267246775208566344687793479998648362494580991864728...
280 0.48262316221828443267246775208566344687793479998648362494580991864728...
290 0.48262316221828443267246775208566344687793479998648362494580991864728...

Wolfram Alpha fails at it, computing "zero". But for exponent 30 it still shows a valid result, and we match:

WolframAlpha: 6.0148312092022347033088447399833343520115646793565705028401966310...
Mine:         6.01483120920223470330884473998333435201156467935657050284019663107410...

Another way, using the first 1001 digits of pi copied from some site, and using integers until the very end, gives 0.48262316221828444 (Try it online!):

a, b = 609, 100
pi = 31415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989
print(a**b * 10**(1000-b) % (2*pi) / 10**1000)

This does the operations with large integers, "scaled up" by 10^1000, until the final downscaling division gives a float.

A third way, using Python's `Fraction`, also resulting in 0.48262316221828444 (Try it online!):

from fractions import Fraction

a, b = Fraction('60.9'), 100
pi = Fraction('3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989')
print(float(a**b % (2*pi)))

Code using decimal (Try it online!):

from decimal import *

a, b = '60.9', 100

def pi():
    """Compute Pi to the current precision.

    >>> print(pi())
    3.141592653589793238462643383

    """
    getcontext().prec += 2  # extra digits for intermediate steps
    three = Decimal(3)      # substitute "three=3.0" for regular floats
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n+na, na+8
        d, da = d+da, da+32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s               # unary plus applies the new precision

for prec in range(100, 300, 10):
    setcontext(Context(prec=prec, Emax=MAX_EMAX, Emin=MIN_EMIN))

    x = Decimal(a) ** b

    try:
        print(prec, str(x % (2*pi()))[:70] + '...')
    except:
        pass
Kelly Bundy
  • I added results for 600, 1000 and 10000 bits; looks like the 1000 matches you, so 600 is still too low for this... – Spektre Jun 02 '22 at 16:06
  • @Spektre Nice to see them match, thanks. And yes, 600 is a little too low, but not by much. 1000 is already overkill if we use the standard `sin` function afterwards. See my [other answer](https://stackoverflow.com/a/72465792/12671057) for some reasoning about how much extra to use. – Kelly Bundy Jun 02 '22 at 16:17

Using arbitrary precision floats and 600 fractional bits for division precision leads to:

 a           = 60.89999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999996385120202345673823
 b           = 100
 pi2         = 6.28318530717958647692528676655900576839433879875021164194988918461563281257241799725606965068423413596429617302656461329418768921910116446345071881625696223490056820540387704221111928924589790986076393
 a^b         = 28955379630372944287405172428351524098232652637160696453610571103951984391040712391370032514984003111097712417114477560530145388673931838194276269928567117043004481189909961875857.75649514478978553782864231122006591641737915748789056749200619756952612910126893035405495135890665595449781429534911495188619709183484163296347386211939610629876033802394440562845818548327528930373773
 a^b mod pi2 = 0.48090443944555460103493387524923864059291388194583637380891303988368325373623569292130225575175455300500465326465846619858936543877561381898189967500531981888897878580530179993696579483550649680773777

This will serve as a reference for the double computation:

pow(60.9,100) = 2.895537963037287827000000000000000000000e+178

As you can see, the rounding error is at magnitude ~10^160, which is hugely bigger than the wanted modulus, leading to completely wrong results...
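A quick way to see why (a side check in Python, not part of the original answer): near 60.9^100 the spacing between adjacent doubles is already around 10^162, so the stored value could stand for any real number in that huge interval, and its remainder modulo 2pi carries no information:

```python
import math

x = 60.9 ** 100           # ~2.8955e+178 stored as a double
gap = math.ulp(x)         # spacing between adjacent doubles at this magnitude
print(gap)                # ~3.6e+162
print(gap > 2 * math.pi)  # True: the modulus result is pure rounding noise
```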

Using power by squaring along with modulo (modpow) to keep the subresults small, like in Blackgaurd's answer:

//---------------------------------------------------------------------------
double mod(double a,double p)
    {
    a/=p;
    a-=floor(a);
    a*=p;
    return a;
    }
//---------------------------------------------------------------------------
double modpow(double a,unsigned int b,double p)
    {
    int i;
    double d;
    a=mod(a,p);
    d=mod(1.0,p);
    for (i=0;i<32;i++)
        {
        d=mod(d*d,p);
        if (b & 0x80000000u)
            {
            d=mod(d*a,p);
            }
        b<<=1;
        }
    return mod(d,p);
    }
//---------------------------------------------------------------------------

Looks slightly better but still way off, as during each iteration we lose mantissa precision bits:

modpow(60.9,100,2.0*M_PI) = 0.000000000004526729

However, as Kelly Bundy points out, modulo on floats can't be used inside or before the pow iterations, as it breaks the math (unlike on integers), making this approach unusable.
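That breakage shows up already at exponent 2; a quick check (Python, not part of the original answer):

```python
import math

tau = 2 * math.pi
direct = (60.9 ** 2) % tau          # ~1.7307, the correctly reduced value
wrong = ((60.9 % tau) ** 2) % tau   # ~0.0845, reducing before squaring breaks it
print(direct, wrong)
```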

So the only way to compute this (unless there is some nice math identity to be exploited) is to use much bigger bitwidth numbers (either fixed point or float). The result of 60.9^100 has 178 decimal digits, so you need at least:

178/log10(2) = 591.3 bits

I chose 600 bits for the mantissa in the computations above (using power by squaring without mod inside the iteration). Beware: you need to have the PI constant at a similar bitwidth in order to make this work.
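The bitwidth estimate above can be reproduced in a couple of lines (Python, just for the arithmetic):

```python
import math

digits = 100 * math.log10(60.9)  # log10(60.9^100) ~ 178.46 decimal digits
bits = 178 / math.log10(2)       # ~591.3 bits, matching the estimate above
print(round(digits, 2), round(bits, 1))
```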

So the answer to your question is: no, I see no way to compute this on double; you have to use big numbers instead. How to do that depends on the language you want to implement this in...

[Edit1] Here are the results computed at different accuracy bitwidths:

  600 bits: a^b mod p = 0.48090443944543787059955763581639220817922063926527443008616966832526276298682799066248021870151726073022025792461359895510004764246892419475030025711547915571989225146813845497084456179528624932006095
 1000 bits: a^b mod p = 0.48262316221828443267246775208566344687793479998648362494580991864728550767313153551910251537428629715050683896930935124670316833678465659261329197846231478482857003648065425433322518882397051713086293
10000 bits: a^b mod p = 0.48262316221828443267246775208566344687793479998648362494580991864728550767313153551910251537428629715050683896930935124670316834694077218601842581288592371259021844778135739430850760935223120750400005
Spektre
  • With 600 bits you get close to my result, do you get closer with more bits? And it looks like you didn't see my [comment](https://stackoverflow.com/questions/72441507/how-to-calculate-sinab-where-ab-can-be-a-very-large-double-number/72441915?noredirect=1#comment127990782_72441507) / [question](https://stackoverflow.com/q/72452006/12671057) pointing out that using modulo to "keep the subresults small" is *entirely* wrong... – Kelly Bundy Jun 01 '22 at 12:35
  • Because of the above, you get [completely wrong results starting at exponent b=2 already](https://ideone.com/Icrirx) (see the stdout section there). Not because "each iteration we lost precision bits of mantissa" but because the *math* is wrong. This is btw exactly what I had done, where I noticed the issue and what led me to create that question. – Kelly Bundy Jun 01 '22 at 13:17
  • @KellyBundy You're right, I missed that it's wrong even if you do `((a%p)^b)%p` instead of `(a^b)%p` where `b>1`, which works on integers ... but I keep feeling there is some math workaround trick for this that we miss... Yes, you can enhance precision by increasing bits; each squaring doubles the number of bits needed. However, it's also a matter of representing `60.9`, which is not representable in binary, and of floating point division precision ... I do not know how your lib implements it; mine I set for 600 fract bits and `a=609; a/=10;` to ensure max precision on the selected bitwidth. – Spektre Jun 01 '22 at 15:04
  • Yeah, I can imagine there's still some "trick", but I don't see one. (There was a comment with another attempted "trick" earlier, but they deleted it when I pointed out that it was wrong even for b=1). I don't know the exact details of Python's `decimal`, but it uses decimal instead of binary and thus does represent 60.9 exactly (when given as *string* like I did). Your 600 bits btw are a little more than my 180 digits, explaining why you get a better result than my 180 digits but a worse result than my 190 and more digits. Maybe use a few more bits? Would be nice to see our results matching. – Kelly Bundy Jun 01 '22 at 15:13
  • @KellyBundy My guts are telling me a solution might be in complex domain and or in logarithms or Taylor series expansion converting the equation to different form ... however that is not my cup of tea the best I could do for similar task was this: [How to express tetration function, for complex numbers](https://stackoverflow.com/a/56735614/2521214) but that solution is not mathematical but geometrical instead ... maybe similar approach could work for this too – Spektre Jun 01 '22 at 15:17
  • @KellyBundy Just found that [((x % m) * (y % m)) % m == (x * y) % m](https://stackoverflow.com/a/37826126/2521214) is not true for non-integers, which more or less prohibits modpow computation in the standard way ... there is also the `log/exp` approach, but that will lead to nowhere too ... – Spektre Jun 01 '22 at 15:29
  • I didn't fully read/understand your tetration stuff, but I think you used the most significant digits there, right? Not the digits far further "to the right". Btw my very first thought was also computing it differently, computing the sine myself [with series](https://en.m.wikipedia.org/wiki/Sine_and_cosine#Series_definitions), but that (and the other definitions there, including continued fractions), seemed even harder. – Kelly Bundy Jun 01 '22 at 15:43
  • Multiplying bigint numbers is an element-wise multiplication in fourier-space (method of convolution theorem in fourier space; polynomial division is convolution). So we transform the list of digits of 60.9 (notated in any base) into the fourier domain and do fourier-space element-wise ^100. Then we transform back and get the digits of the result. If `60.9` was normalized by dividing by the 100th root of 2pi in the beginning, we only need the fractional part of the result. If the chosen base for the digit-wise representation is an integer (e.g. 2), we can just sum up the fractional half. – Sebastian Jun 01 '22 at 15:56
  • As we would do element-wise ^100, the Fourier transform and its inverse have to be calculated with high precision. – Sebastian Jun 01 '22 at 16:03
  • @Sebastian how is that different to computing on big bitwidth? once you hit the thresholds NTT is used for multiplication ... which is the same as your FFT approach – Spektre Jun 02 '22 at 09:17
  • @Spektre Just thinking about ways, how to use it for powers ... – Sebastian Jun 02 '22 at 15:41

A full Python implementation combining my first answer's semi-automatic decimal method with Spektre's observation on how to calculate the required precision.

Our input to sin is a^b % 2π, which lies in [0, 2π), so its integer part has one digit. Beyond that one digit, I want 15 fractional digits, as Python's floats have about 16 digits of precision. So I use 30 digits more than the integer part of a^b in order to get 30 fractional digits (around 15 would suffice, but a few more don't hurt, and I don't want to try to compute exactly how many are needed).

from math import log10, sin
from decimal import setcontext, Context, getcontext, Decimal

def bigsin(a, b):
    prec = int(log10(a)*b) + 30
    setcontext(Context(prec=prec, Emax=prec + 10))
    a = Decimal(str(a))
    return sin(float(a**b % (2*pi())))

def pi():
    """Compute Pi to the current precision.

    >>> print(pi())
    3.141592653589793238462643383

    """
    getcontext().prec += 2  # extra digits for intermediate steps
    three = Decimal(3)      # substitute "three=3.0" for regular floats
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n+na, na+8
        d, da = d+da, da+32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s               # unary plus applies the new precision

print(bigsin(60.9, 100))

Output (Try it online!):

0.4641043156966329

Note this differs from my first answer because here I computed the sine value, not just the moduloed power.

Also note that while I start with 60.9 as a float (which doesn't represent 60.9 exactly), `str` turns that into the string `'60.9'`, so that `Decimal` then does represent 60.9 exactly.
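That distinction can be demonstrated in isolation (a side note, not part of the original answer):

```python
from decimal import Decimal

print(Decimal(60.9))       # 60.90000000000000213... -- the float's exact binary value
print(Decimal(str(60.9)))  # 60.9 -- exact, since str(60.9) gives '60.9'
```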

Kelly Bundy

Here is an example of the comment by @Marat in C++:

double dmod(double x, double y) {
    return x - (int)(x/y) * y;
}

double bin_pow(double a, double exp, double mod) {
    if (!exp) return 1.0;
    double ret = 1;
    while (exp > 0) {
        if (exp & 1) ret = dmod(ret * a, mod);
        a = dmod(a * a, mod);
        exp >>= 1;
    }
    return ret;
}
Blackgaurd
  • You're performing a bitshift on a `double`? This doesn't divide by 2 ... and would lead to situations where `exp` is much smaller than 1, but that loop won't let it exit. – Elliott May 31 '22 at 04:25
  • The precision is lost https://stackoverflow.com/questions/72441507/how-to-calculate-sinab-where-ab-can-be-a-very-large-double-number/72441915#comment127973811_72441507 – aristotll May 31 '22 at 05:47
  • So make the exponent an `int`? – Sebastian May 31 '22 at 10:19
  • Probably numerically better to keep the number within -pi/2..pi/2? – Sebastian May 31 '22 at 17:09
  • Even if you repair the power-by-squaring code, this way of computing the modpow will have rounding error hugely bigger than PI itself, leading to entirely wrong results... It would work if you used an arbitrary precision float instead of double ... – Spektre Jun 01 '22 at 08:57