
The thread "Why can't I get a p-value smaller than 2.2e-16?" begins to answer my problem; however, my issue is with using the F-distribution to generate p-values less than 2.2e-16:

    1 - pf(50, 3, 200)
    [1] 0

My goal is to mimic `levene.test` from the lawstat package, but with a "lite" version of the function wrapped in `apply` so that 10-100,000 calculations run more efficiently. I have achieved this, but a few of the p-values come out below 2.2e-16. The `levene.test` function, which employs the `anova` function, is able to deal with these.

Other than going back to my function and editing it to use `anova` instead of `pf`, does anyone have any ideas why `pf` is limited in this way but `anova` isn't? If I use the `anova` function it would double the runtime.
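
For reference, here is a minimal sketch of the underflow I am seeing, together with the `lower.tail` and `log.p` arguments of `pf` (see `?pf`) that look relevant; I have not verified that this is what `anova` does internally:

    # 1 - pf(...) underflows: pf() returns a value so close to 1 that the
    # subtraction is lost to double precision (about 2.2e-16 near 1).
    1 - pf(50, 3, 200)

    # Asking for the upper tail directly avoids the subtraction, so the
    # result can be far smaller than 2.2e-16.
    pf(50, 3, 200, lower.tail = FALSE)

    # On the log scale the value stays representable even further out;
    # -log10(p) for plotting is then just a change of base.
    log_p <- pf(50, 3, 200, lower.tail = FALSE, log.p = TRUE)
    -log_p / log(10)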

George
  • And you need to generate p-values less than 2.2e-16 because...? I'm not trying to be unhelpful here but from a statistical point of view, why? – Maurits Evers Apr 09 '19 at 23:30
  • I understand the precision of the p-value here may seem pointless; however, in my field there are often adjustments for multiple testing > 100000. There is also the requirement to visualize some of these observations. I'm not too concerned about these p-values, but I was trying to understand why some functions can work around the lower-limit threshold, and whether I can implement those workarounds – George Apr 09 '19 at 23:41
  • I'm sorry, but that doesn't sound sensible at all. A multiple-testing correction for 10^5 tests still leaves you 11 orders of magnitude (!!) above the precision limit of your p-value, so it won't make any difference. Btw, in my area of work we often deal with similar testing scenarios. In my personal opinion, there is very little point in reporting p-values less than e.g. 10^-5. Multiple-testing correction is a whole new can of worms that I'd avoid at all costs, especially for that many tests, which are probably not all independent. – Maurits Evers Apr 09 '19 at 23:46
  • I completely agree with everything you are saying, and I'm not stressed about losing any precision below 2.2e-16. The reason this problem arose was trying to plot some -log10(p) values. I'm under no illusion that there is any statistical merit in gaining more precision, but I was rather curious how the functions (`anova`, `t.test`, etc.) were getting around this – George Apr 10 '19 at 00:01
  • In response to your question: You can see the [smallest positive floating-point number](https://stat.ethz.ch/R-manual/R-devel/library/base/html/zMachine.html) on your machine with `.Machine$double.eps`, which I imagine will be (around) 2.2e-16. So this is a simple result of (limited) machine precision. – Maurits Evers Apr 10 '19 at 00:16
  • @MauritsEvers: You missed an important part of the definition of `.Machine$double.eps`: it's the smallest positive floating-point number x *such that 1 + x != 1*. The last part is important; R can easily hold numbers like `1e-300`, but `1 + 1e-300 == 1`. – user2554330 Apr 10 '19 at 00:45
  • @George: If you posted code there might be the possibility of help but at the moment it's not possible, for me anyway, to understand what you are attempting. There is a `log.p` parameter to `pf` that you have not yet exploited. – IRTFM Apr 10 '19 at 01:06
  • @user2554330 Yes I am perfectly aware of that; `.Machine$double.eps` gives you the precision of numbers *close to zero*. That is exactly what matters here. You can see the minimum-representable normalized positive floating-point value with `.Machine$double.xmin`. There is a [really useful post with some excellent answers](https://stackoverflow.com/questions/24847918/extreme-numerical-values-in-floating-point-precision-in-r) here on SO that provides a lot of details. – Maurits Evers Apr 10 '19 at 01:06
  • @MauritsEvers: I believe you are aware of all of this, but you aren't expressing it clearly. `.Machine$double.eps` is *not* the precision of numbers close to zero, it's the precision of numbers close to one (see the short sketch after these comments). – user2554330 Apr 10 '19 at 09:58
  • @user2554330 You are absolutely right, and I did indeed express it poorly. Thanks for the clarification. – Maurits Evers Apr 10 '19 at 10:23
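
To make the distinction in the comments above concrete, a minimal sketch (the values noted in the comments assume ordinary IEEE-754 doubles and are machine-dependent):

    # Spacing of doubles near 1: the ~2.2e-16 "wall" hit by 1 - pf(...).
    .Machine$double.eps

    # Smallest normalized positive double, around 2.2e-308: numbers far
    # smaller than eps are still representable on their own.
    .Machine$double.xmin

    # 1e-300 is lost when added to 1, but is perfectly fine by itself.
    1 + 1e-300 == 1   # TRUE
    1e-300 > 0        # TRUE

    # Hence 1 - pf(50, 3, 200) collapses to 0, while
    # pf(50, 3, 200, lower.tail = FALSE) does not.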

0 Answers