
I have noticed that the dunn.test package and DescTools (p.182) both provide Dunn's test, a post hoc pairwise multiple-comparison procedure appropriate after rejection of a Kruskal-Wallis test.

What are the differences between them?

I have performed both tests on my data with the Bonferroni correction, and I get quite different results:

DunnTest(mydata$value, mydata$group, method = "bonferroni") # DescTools package

Dunn's test of multiple comparisons using rank sums : bonferroni  

    mean.rank.diff    pval    
B-A      50.721785  0.0010 ***
C-A     -51.035983  0.1361    
D-A      -8.332766  1.0000    
C-B    -101.757768 2.4e-06 ***
D-B     -59.054552 3.2e-05 ***
D-C      42.703216  0.3202    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

> prg_diff <- dunnTest(mydata$value~ mydata$group, kw = TRUE, method =  "bonferroni") # bonferroni adjustment for multiple testing
> prg_diff
Dunn (1964) Kruskal-Wallis multiple comparison
  p-values adjusted with the Bonferroni method.

  Comparison          Z      P.unadj        P.adj
1      A - B -3.7648546 1.666460e-04 9.998760e-04
2      A - C  2.2788603 2.267537e-02 1.360522e-01
3      B - C  5.0702968 3.971958e-07 2.383175e-06
4      A - D  0.5096668 6.102849e-01 1.000000e+00
5      B - D  4.5489775 5.390721e-06 3.234432e-05
6      C - D -1.9319401 5.336689e-02 3.202014e-01
> print(prg_diff,dunn.test.results=TRUE)
  Kruskal-Wallis rank sum test 

 data: x and g 
 Kruskal-Wallis chi-squared = 45.1676, df = 3, p-value = 0 


                              Comparison of x by g                               
                                  (Bonferroni)                                   
 Col Mean-| 
 Row Mean |          A          B          C 
 ---------+--------------------------------- 
        B |  -3.764854 
          |    0.0010* 
          | 
        C |   2.278860   5.070296 
          |     0.1361    0.0000* 
          | 
        D |   0.509666   4.548977  -1.931940 
          |     1.0000    0.0000*     0.3202 

 alpha = 0.05 
 Reject Ho if p <= alpha 

Why are the results different?

How is it possible that DunnTest yields pval = 1.0000?

How can the results be interpreted?

  • The results are the same, this is a frequent duplicate. – Rui Barradas Mar 10 '19 at 17:29
  • @RuiBarradas the answer you suggested does not discuss the interpretation of Dunn's Tests, nor the p-value of 1 – user1607 Mar 10 '19 at 19:05
  • You are asking two questions at the same time. As for the results of both tests, they **are the same** within floating-point accuracy, be it of the internal representation of the double precision floats (64 bits) or of the algorithms involved. That is the duplicate. As for the p-value equal to `1`, it is perfectly possible, and impossible for us to tell the reason why without seeing the data, which you have not posted. See [How to make a great R reproducible example](https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example). – Rui Barradas Mar 10 '19 at 19:48
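A quick check of the arithmetic supports the comment above: the Bonferroni adjustment simply multiplies each unadjusted p-value by the number of pairwise comparisons (6 for 4 groups) and truncates at 1, so both packages agree, and an adjusted p-value of exactly 1 is expected whenever P.unadj × 6 exceeds 1. A minimal sketch (in Python rather than R, reusing the P.unadj values printed by `dunnTest()` above):

```python
# Bonferroni adjustment: p.adj = min(1, p.unadj * m), where m is the
# number of pairwise comparisons. Values copied from the dunnTest() output.
p_unadj = {
    "A - B": 1.666460e-04,
    "A - C": 2.267537e-02,
    "B - C": 3.971958e-07,
    "A - D": 6.102849e-01,
    "B - D": 5.390721e-06,
    "C - D": 5.336689e-02,
}
m = len(p_unadj)  # 6 comparisons among 4 groups

p_adj = {cmp: min(1.0, p * m) for cmp, p in p_unadj.items()}
for cmp, p in p_adj.items():
    print(f"{cmp}: {p:.6g}")
# A - D: 0.6102849 * 6 ≈ 3.66, truncated to 1 — hence the reported pval of 1.0000
```

These reproduce DunnTest's adjusted p-values (e.g. A - B: 9.99876e-04, C - D: 0.3202014), so the two packages differ only in presentation, not in the underlying test.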

0 Answers