
I would like to perform a bootstrapped paired t-test in R. I have tried this on multiple datasets that return p < .05 under a parametric paired t-test, but when I run the bootstrap I get p-values between 0.4 and 0.5. Am I running this incorrectly?

   differences <- groupA - groupB
   t.test(differences)  # to get the t-statistic, e.g. 1.96

   Repnumber <- 10000
   tstat.values <- numeric(Repnumber)
   for (i in 1:Repnumber) {
     group1 <- sample(differences, size = length(differences), replace = TRUE)
     tstat.values[i] <- t.test(group1)$statistic
   }

   #### To get the bootstrap p-value, compare the number of tstat.values
   #### greater (or lesser) than or equal to the original t-statistic,
   #### divided by the number of reps:

   sum(tstat.values<=-1.96)/Repnumber

Thank you!


1 Answer


It looks like you're comparing apples and oranges. The single t-test on differences gives you a t-statistic which, if it exceeds the critical value, indicates that the mean difference between groupA and groupB is significantly different from zero. Your bootstrapping code does the same thing, but for 10,000 bootstrapped resamples of differences, giving you an estimate of how the t-statistic varies over different random samples from the population of differences. If you take the mean of these bootstrapped t-statistics (mean(tstat.values)), you'll see it's about the same as the single t-statistic from the full sample of differences.
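To see this concretely, here is a small sketch using simulated paired differences (the data are made up purely for illustration): the bootstrap distribution of the t-statistic centers near the observed statistic, not near zero, which is why comparing it against ±1.96 does not yield a p-value.

```r
set.seed(42)
# simulated paired differences (made-up data, for illustration only)
differences <- rnorm(30, mean = 0.5, sd = 1)
obs_t <- t.test(differences)$statistic

B <- 2000
tstat.values <- numeric(B)
for (i in 1:B) {
  boot_sample <- sample(differences, size = length(differences), replace = TRUE)
  tstat.values[i] <- t.test(boot_sample)$statistic
}

# the bootstrap distribution is centered near the observed t-statistic,
# not near zero, so it says nothing directly about the null hypothesis
c(observed = obs_t, bootstrap_mean = mean(tstat.values))
```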

sum(tstat.values<=-1.96)/Repnumber gives you the proportion of bootstrapped t-statistics less than or equal to -1.96. This is an estimate of how often you would get a t-statistic below -1.96 in repeated random samples from your population. In effect, it estimates the power of your test to detect a difference of a given size between groupA and groupB at a given sample size and significance level, though I'm not sure how robust such a power analysis is.

In terms of properly bootstrapping the t-test, I think what you actually need is some kind of permutation test: check whether your actual data are an outlier compared with repeatedly shuffling the labels on your data and running a t-test on each shuffled dataset. You might want to ask a question on CrossValidated to get advice on how to do this properly for your data; there are several answers there on bootstrap and permutation hypothesis tests.
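As a sketch of what such a test could look like for paired data (again using simulated data): for paired samples, the usual label shuffle is a sign-flip permutation. Under the null hypothesis that the mean difference is zero, each difference is equally likely to be positive or negative, so we randomly flip signs and recompute the t-statistic to build the null distribution.

```r
set.seed(1)
# simulated paired data (made-up, for illustration only)
groupA <- rnorm(20, mean = 1)
groupB <- rnorm(20, mean = 0)
differences <- groupA - groupB
obs_t <- t.test(differences)$statistic

Repnumber <- 2000
perm_t <- numeric(Repnumber)
for (i in 1:Repnumber) {
  # under H0 the sign of each paired difference is arbitrary,
  # so flipping signs at random generates the null distribution
  flips <- sample(c(-1, 1), size = length(differences), replace = TRUE)
  perm_t[i] <- t.test(flips * differences)$statistic
}

# two-sided permutation p-value: how often a shuffled dataset
# produces a t-statistic at least as extreme as the observed one
p_value <- mean(abs(perm_t) >= abs(obs_t))
p_value
```

Unlike the bootstrap loop in the question, this compares the observed statistic to a distribution generated under the null, so a small p_value indicates a real difference.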
