Background: I am simulating data from a Cox proportional hazards model, using the method of Bender et al. (2005). It is explained and implemented very well in R in this post. The intricacies of Cox models are not important to this question.
Problem: I need to evaluate exp(-1 * X * beta). Due to some of the specifics of the problem I'm working on, either X or beta needs to be relatively large, somewhere between 50 and 100. I believe this causes accuracy problems.
X.stable = seq(from = 0, to = 1, by = .1)
beta = 1
exp(-1 * X.stable * beta)
[1] 1.0000000 0.9048374 0.8187308 0.7408182 0.6703200 0.6065307 0.5488116 0.4965853 0.4493290 0.4065697 0.3678794
X.unstable = seq(from = 0, to = 100, by = 5)
exp(-1 * X.unstable * beta)
 [1] 1.000000e+00 6.737947e-03 4.539993e-05 3.059023e-07 2.061154e-09 1.388794e-11 9.357623e-14 6.305117e-16
 [9] 4.248354e-18 2.862519e-20 1.928750e-22 1.299581e-24 8.756511e-27 5.900091e-29 3.975450e-31 2.678637e-33
[17] 1.804851e-35 1.216099e-37 8.194013e-40 5.521082e-42 3.720076e-44
I don't believe the values at the end are correct. Did R really calculate exp(x) to 44 digits? Eventually I plug these values into a function, and I think I'm getting nonsense answers. Am I putting too little faith in R, or am I correct? If this is in fact a problem, is there anything I can do about it?
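If it matters, these are the machine constants I looked at (I'm assuming R uses standard IEEE 754 doubles; the values in the comments are the usual figures):

.Machine$double.eps    # relative machine precision, about 2.2e-16 (~16 significant digits)
.Machine$double.xmin   # smallest positive normalized double, about 2.2e-308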
I have considered using only the first 2-3 terms of the Taylor series representation of e^x. I could do that without a problem, but I don't know whether it would solve my issue, because I assume R is already doing something like that under the hood.
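To make the idea concrete, here is a rough sketch of the truncated series (taylor_exp is just a name I made up for illustration):

taylor_exp <- function(x, k = 3) {
  # Sum the first k terms of e^x = sum_{n >= 0} x^n / n!
  n <- 0:(k - 1)
  sapply(x, function(xi) sum(xi^n / factorial(n)))
}
taylor_exp(-0.1)   # about 0.905, close to exp(-0.1)
taylor_exp(-100)   # 4901, nowhere near 3.7e-44, so maybe this won't help for large arguments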
To boil down my question: can I get precise values from the exp() function even when the arguments are large?