
In R, I have a vector of logged values (denoted log_w).

I need to calculate sum(exp(log_w)); however, because of the size of the values held in log_w, exponentiating log_w gives a vector of zeros (every element underflows).

Example in R, showing a small vector of similar values:

    log_w<-c(-14781.5000092473, -1.68703503244454e+46, -4.24636436410052e+36, 
             -1.90459568391779e+30, -6565872478811.07, -1.58627856636904e+32, 
             -1.41504360955663e+84, -9.09553094112168e+52, -1.8247785275833e+40, 
             -10566415795189.4)
    exp(log_w)        # every element underflows to 0
    sum(exp(log_w))   # so the sum is 0 as well

I am aware of LogSumExp (https://en.wikipedia.org/wiki/LogSumExp) for calculating the log of a sum of exponentiated values; however, I cannot see how it can be used to calculate the sum itself of a vector of exponentiated logged values. Are there any other similar approximations anyone could point me towards for calculating the sum of the exponential of logged values? Any advice greatly appreciated.
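
For reference, a minimal base-R sketch of the LogSumExp trick mentioned above (the helper name `logSumExp` is my own; `matrixStats::logSumExp` provides the same thing). It factors out the largest element before exponentiating, so it returns log(sum(exp(log_w))) without underflow, although on its own it only gives the log of the sum:

    ## log(sum(exp(x))) computed without underflow, by factoring out max(x)
    logSumExp <- function(x) {
      m <- max(x)
      m + log(sum(exp(x - m)))
    }

    logSumExp(log_w)   # about -14781.5; finite even though exp(log_w) underflows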

EDIT

Further information: log_w contains logged weights. I need to normalise the weights, i.e. calculate w/sum(w); however, I currently only have the logged weights (log(w)).
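
If the goal is the normalised vector w/sum(w) rather than the sum itself, the normalisation can stay entirely in log space, since log(w/sum(w)) = log_w - logSumExp(log_w), and exponentiating at the end is safe because normalised weights lie in [0, 1]. A minimal sketch reusing the `logSumExp` helper above (`normalise_log_weights` is a hypothetical name of my own):

    ## log of the normalised weights, then back to the natural scale
    normalise_log_weights <- function(log_w) {
      exp(log_w - logSumExp(log_w))
    }

    w_norm <- normalise_log_weights(log_w)
    sum(w_norm)   # 1; for this example the largest weight gets essentially all the mass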

EDIT

Using the Brobdingnag library (https://cran.r-project.org/web/packages/Brobdingnag/index.html) I can calculate exp(log_w); however, with that package sum(exp(log_w)) results in +exp(0).
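
To make the Brobdingnag attempt concrete, a sketch of the kind of code I mean (the exact calls are an assumption on my part; brob(x) stores a number as its log, so brob(log_w) should represent exp(log_w) without ever forming the underflowing doubles):

    library(Brobdingnag)

    ## brob(log_w) holds exp(log_w) as (log-magnitude, sign) pairs,
    ## so the underflowing doubles are never created
    w_brob <- brob(log_w)
    s <- sum(w_brob)   # sum of the exponentiated values, kept as a brob
    log(s)             # in principle the log of the sum, comparable to logSumExp(log_w)

With the example vector the true sum is dominated by exp(-14781.5), so if sum() really returns +exp(0) (i.e. 1), something is being lost along the way.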

Comments:

  • none of those numbers come even close to fitting in double precision once exponentiated... what are you hoping to accomplish exactly? – MichaelChirico Mar 26 '20 at 15:04
  • With a list of numbers like that, pick the largest and exponentiate that. The sum of the others will be a rounding error compared to the biggest. – Gregor Thomas Mar 26 '20 at 15:08
  • I've added more information in the question – mes Mar 26 '20 at 15:09
  • See https://stackoverflow.com/questions/22466328/how-to-work-with-large-numbers-in-r – Ian Campbell Mar 26 '20 at 15:11
  • ...but even your biggest, -14781, [exponentiated is 4.9e-6420](https://www.wolframalpha.com/input/?i=exp%28-14781%29). The next biggest is the -65658... and even Wolfram Alpha won't compute it. Setting the weight of the biggest to 1 and the rest to 0 will be accurate to, probably, 20 decimal places. – Gregor Thomas Mar 26 '20 at 15:14

0 Answers