
I'm trying to obtain diversity indices/estimators and dissimilarity measures for a community abundance matrix comprising samples taken from multiple assemblages. Specifically, I'm running Anne Chao's function:

SimilarityMult: "estimating various similarity indices among N communities. Both richness- and abundance-based N-community similarity indices are included" -- Anne Chao's SpadeR package documentation.
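
For reference, here is the call shape as I read ?SimilarityMult (the argument names X, datatype, and goal are my interpretation of the help page and worth double-checking):

SimilarityMult(X, datatype = "abundance", q = 2, nboot = 200, goal = "relative")
# X:        species-by-communities matrix or data frame
# datatype: "abundance" here; the package also accepts incidence data
# q:        order of the similarity index (0, 1, or 2)
# nboot:    number of bootstrap replications used for standard errors
# goal:     "relative" or "absolute" similarity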

I use packages:

library(devtools)                  # provides install_github()
install_github('AnneChao/SpadeR')  # development version from GitHub
library(SpadeR)                    # SimilarityMult() lives here
library(vegan)                     # general community ecology tools

ISSUE

My community abundance matrix setup: 108 species in rows, 144 sites/samples in columns.
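
As I understand the SpadeR documentation, abundance input should be a species-by-communities matrix, so this orientation should be correct; a quick sanity check:

dim(bugs)             # 108 species (rows) by 144 samples (columns)
head(rownames(bugs))  # species identifiers
head(colnames(bugs))  # sample identifiers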

I run:

SimilarityMult(bugs, "abundance", q = 1, nboot = 200, "relative")

which yields:

Error in rmultinom(nboot, sum(X2), p[, 2]) : NA in probability vector

I run:

SimilarityMult(bugs, "abundance", q = 2, nboot = 200, "relative")

which yields:

Error in rmultinom(1, ni[k], p[, k]) : NA in probability vector
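
Both messages come from rmultinom(), which SpadeR apparently calls to resample counts during bootstrapping, so the NA seems to arise in a probability vector estimated for one of my communities. A quick way to look for suspiciously sparse samples (my own diagnostic sketch, not part of SpadeR):

colSums(bugs)                          # total individuals per sample
colSums(bugs > 0)                      # observed species richness per sample
sapply(bugs, function(x) sum(x == 1))  # singleton count per sample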

Below is a subset of my data (the first 20 rows and first 25 columns) from which the matrix can be recreated to reproduce the errors.

I run:

dput(bugs[1:20, 1:25])

Output:

bugs  <-  structure(list(V1 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 
0L, 0L, 0L, 8L, 0L, 0L, 12L, 0L, 0L, 0L), V2 = c(0L, 0L, 0L, 
7L, 0L, 1L, 0L, 5L, 0L, 0L, 0L, 0L, 2L, 235L, 0L, 0L, 453L, 8L, 
0L, 0L), V3 = c(0L, 0L, 0L, 13L, 0L, 0L, 0L, 2L, 2L, 0L, 0L, 
1L, 0L, 82L, 0L, 0L, 60L, 1L, 8L, 0L), V4 = c(0L, 0L, 0L, 4L, 
0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1051L, 0L, 0L, 48L, 58L, 
0L, 0L), V5 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 
0L, 47L, 0L, 0L, 5L, 3L, 0L, 0L), V6 = c(0L, 0L, 0L, 18L, 0L, 
3L, 0L, 7L, 5L, 0L, 0L, 3L, 1L, 271L, 0L, 0L, 176L, 21L, 0L, 
0L), V7 = c(0L, 1L, 0L, 16L, 0L, 21L, 0L, 0L, 0L, 0L, 0L, 0L, 
0L, 2L, 0L, 0L, 35L, 0L, 0L, 0L), V8 = c(0L, 0L, 0L, 17L, 1L, 
1L, 0L, 2L, 3L, 0L, 0L, 0L, 0L, 52L, 0L, 1L, 28L, 32L, 0L, 0L
), V9 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 
29L, 0L, 0L, 8L, 3L, 8L, 0L), V10 = c(0L, 0L, 0L, 0L, 0L, 0L, 
0L, 0L, 0L, 0L, 0L, 2L, 0L, 25L, 0L, 0L, 14L, 0L, 0L, 0L), V11 = c(0L, 
12L, 0L, 14L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 61L, 0L, 0L, 
32L, 0L, 6L, 0L), V12 = c(0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 
0L, 0L, 0L, 0L, 152L, 0L, 0L, 8L, 3L, 0L, 0L), V13 = c(0L, 0L, 
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 15L, 
0L, 0L, 0L), V14 = c(0L, 0L, 0L, 14L, 0L, 0L, 0L, 0L, 0L, 0L, 
0L, 0L, 0L, 67L, 0L, 0L, 23L, 10L, 0L, 0L), V15 = c(0L, 2L, 0L, 
5L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 2L, 1L, 0L, 1L, 2L, 4L, 
0L), V16 = c(0L, 0L, 1L, 10L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 
0L, 30L, 0L, 0L, 2L, 18L, 0L, 0L), V17 = c(0L, 0L, 0L, 0L, 0L, 
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 3L, 2L, 2L, 0L), 
    V18 = c(0L, 0L, 0L, 1L, 0L, 1L, 0L, 4L, 0L, 0L, 0L, 0L, 0L, 
    14L, 0L, 0L, 37L, 10L, 0L, 0L), V19 = c(0L, 1L, 0L, 13L, 
    0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 10L, 0L, 0L, 14L, 26L, 
    0L, 0L), V20 = c(0L, 0L, 0L, 6L, 0L, 2L, 0L, 0L, 0L, 0L, 
    0L, 0L, 0L, 18L, 0L, 0L, 2L, 12L, 0L, 0L), V21 = c(0L, 0L, 
    0L, 0L, 0L, 0L, 2L, 0L, 0L, 0L, 0L, 0L, 0L, 41L, 0L, 0L, 
    13L, 0L, 0L, 0L), V22 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 
    0L, 0L, 0L, 0L, 0L, 101L, 0L, 0L, 15L, 5L, 0L, 0L), V23 = c(0L, 
    0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 15L, 0L, 
    0L, 17L, 0L, 1L, 0L), V24 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 
    0L, 0L, 0L, 0L, 0L, 0L, 2L, 0L, 0L, 0L, 2L, 0L, 0L), V25 = c(0L, 
    0L, 0L, 0L, 0L, 0L, 0L, 0L, 7L, 0L, 0L, 0L, 0L, 134L, 0L, 
    0L, 19L, 0L, 0L, 0L)), row.names = c("1", "2", "3", "4", 
"5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", 
"16", "17", "18", "19", "20"), class = "data.frame")

Evaluating the structure() call above recreates the data frame bugs; passing it to the SimilarityMult calls shown earlier reproduces the errors.

I do not understand why the code fails. What can I do to make the function work properly? Any solution/advice on the matter would be greatly appreciated.

Thank you!

  • Hi thomatostew, welcome to Stack Overflow. It will be much easier to help if you provide at least a sample of your data with `dput(bugsV)` or if your data is very large `dput(bugsV[1:20,])`. You can [edit] your question and paste the output. Please surround the output with three backticks (```) for better formatting. See [How to make a reproducible example](https://stackoverflow.com/questions/5963269/) for more info. – Ian Campbell Jul 03 '20 at 04:50
  • @IanCampbell Thank you for your guidance! I have never posted in stack overflow before and your post formatting suggestions/edits were very helpful. For anyone interested in this post, please, let me know if the data I have included is not helpful in solving this issue. If you require more information please comment! Thank you! – thomatostew Jul 03 '20 at 22:19
  • If I can remember, I'll add a [bounty](https://stackoverflow.com/help/bounty) to your question after the 48 hours have elapsed. Good luck. – Ian Campbell Jul 03 '20 at 23:16
  • Just curious, is there a specific reason you want to use this particular function/package? I plugged your object into `vegdist` and it seems to work just fine using that function for calculating dissimilarity indices (I wanted to see if I could recreate the problem with another library). – cdtip Jul 13 '20 at 04:23

1 Answer


I spent a long while trying to solve your problem, including reviewing the source code and the issues noted on GitHub. I noticed you opened an issue there as well, but I suspect the author has abandoned the project.

I can't figure out why, but if we remove communities 15 and 17, your code executes.

SimilarityMult(bugs[, -c(15, 17)], "abundance", q = 1, nboot = 200, "relative")
#...
#  Pairwise similarity matrix: 
#
#    Horn(i,j) 1       2       3       4       5       6       7       8       9       10       11       12       13       14       15       16       17       18       19       20       21       22       23       
#       1      1.000   0.991   0.931   0.760   0.761   0.952   0.648   0.802   0.796   0.939   0.869   0.715   0.877   0.849   0.555   0.889   0.657   0.571   0.893   0.819   0.983   0.463   0.820   
#       2              1.000   0.900   0.644   0.814   0.930   0.638   0.756   0.772   0.912   0.799   0.734   0.954   0.838   0.634   0.912   0.616   0.654   0.875   0.813   0.961   0.669   0.774   
#       3                      1.000   0.764   0.837   0.955   0.611   0.846   0.887   0.926   0.924   0.808   0.833   0.910   0.723   0.842   0.665   0.732   0.891   0.843   0.959   0.597   0.841   
#       4                              1.000   0.993   0.830   0.286   0.763   0.834   0.849   0.731   0.996   0.513   0.875   0.797   0.629   0.601   0.790   0.909   0.984   0.770   0.873   0.945   
#       5                                      1.000   0.918   0.265   0.807   0.890   0.877   0.800   0.985   0.445   0.914   0.795   0.658   0.573   0.774   0.928   0.998   0.792   0.801   0.948   
#       6                                              1.000   0.623   0.891   0.841   0.955   0.859   0.871   0.853   0.950   0.803   0.911   0.732   0.817   0.926   0.920   0.938   0.797   0.893   
#       7                                                      1.000   0.579   0.341   0.484   0.622   0.299   0.771   0.541   0.345   0.656   0.592   0.457   0.404   0.310   0.587   0.175   0.309   
#       8                                                              1.000   0.776   0.784   0.776   0.749   0.695   0.931   0.923   0.872   0.911   0.932   0.769   0.788   0.782   0.888   0.722   
#       9                                                                      1.000   0.842   0.843   0.837   0.623   0.849   0.723   0.707   0.605   0.722   0.856   0.884   0.888   0.764   0.825   
#       10                                                                              1.000   0.863   0.843   0.738   0.875   0.634   0.773   0.574   0.628   0.946   0.904   0.940   0.528   0.902   
#       11                                                                                      1.000   0.795   0.766   0.879   0.687   0.728   0.676   0.681   0.855   0.802   0.901   0.497   0.792   
#       12                                                                                              1.000   0.394   0.868   0.761   0.561   0.491   0.753   0.912   0.980   0.743   0.743   0.953   
#       13                                                                                                      1.000   0.698   0.354   0.895   0.613   0.391   0.644   0.547   0.848   0.352   0.558   
#       14                                                                                                              1.000   0.889   0.794   0.811   0.887   0.884   0.909   0.847   0.828   0.843   
#       15                                                                                                                      1.000   0.682   0.848   0.956   0.664   0.757   0.571   0.927   0.644   
#       16                                                                                                                              1.000   0.792   0.704   0.706   0.681   0.847   0.743   0.597   
#       17                                                                                                                                      1.000   0.863   0.530   0.565   0.614   0.829   0.439   
#       18                                                                                                                                              1.000   0.647   0.742   0.580   0.913   0.618   
#       19                                                                                                                                                      1.000   0.948   0.904   0.550   0.946   
#       20                                                                                                                                                              1.000   0.835   0.794   0.957   
#       21                                                                                                                                                                      1.000   0.486   0.833   
#       22                                                                                                                                                                              1.000   0.554   
#       23                                                                                                                                                                                      1.000   
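
For what it's worth, those two columns are among the sparsest samples in the subset, which could plausibly break the probability vector the bootstrap estimates; that's a guess from the error message, not something I verified in the source. A quick look:

colSums(bugs[, c(15, 17)])                          # totals: 18 and 8 individuals
colSums(bugs[, c(15, 17)] > 0)                      # richness: 8 and 4 species
sapply(bugs[, c(15, 17)], function(x) sum(x == 1))  # singletons: 3 and 1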

I know this is unlikely to fully resolve your issue, but hopefully it will help you or someone else figure it out.
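
As an alternative, if you only need pairwise dissimilarities rather than SpadeR's bias-corrected N-community estimates, cdtip's comment on the question points out that vegan handles this object fine. A minimal sketch with the Horn-Morisita index, the closest analogue to the Horn(i,j) output above:

library(vegan)
# vegdist() expects sites in rows and species in columns,
# so transpose the species-by-site data frame first
d <- vegdist(t(bugs), method = "horn")  # Horn-Morisita dissimilarity
round(as.matrix(d)[1:5, 1:5], 3)        # inspect one corner of the matrix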

Ian Campbell
  • It is quite unlikely to be related but I recall constructing similar object in https://stackoverflow.com/questions/17202421/r-vectorized-array-data-manipulation – jangorecki Jul 06 '20 at 14:55