Summing the columns of a sparse matrix with Matrix::colSums() gives results that differ from apply(, MARGIN = 2, FUN = sum) in the code below.
library(Matrix)
# Initialise RNG
set.seed(3737)
# Parameters
nr <- 2000 # Number of rows
nc <- 5 # Number of columns
nz <- round(nr / 10) # Number of non-zero entries in each column
# Create a sparse matrix with the given numbers of rows, columns and non-zero entries
a.sparse <- Matrix(0, nrow = nr, ncol = nc, sparse = TRUE)
for (c. in 1:nc) {
  a.sparse[sample(nr, nz), c.] <- rnorm(nz)
}
# Column sums are inconsistent between colSums and apply
colSums(a.sparse) - apply(a.sparse, MARGIN = 2, FUN = sum)
# Column sums are consistent between colSums and apply in dense case
a.dense <- as.matrix(a.sparse)
class(a.dense)
colSums(a.dense) - apply(a.dense, MARGIN = 2, FUN = sum)
For example, the colSums(a.sparse) - apply(a.sparse, MARGIN = 2, FUN = sum) line gives this output:
3.907985e-14 2.486900e-14 1.421085e-13 -1.421085e-13 0.000000e+00
whereas the dense case agrees exactly. Can anyone shed any light on what is causing the inconsistency here? The discrepancies are orders of magnitude larger than .Machine$double.eps
and are causing instability in the software I am analysing.
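For scale, here is a quick sketch (run after the code above; the exact values will depend on the session) comparing the largest discrepancy to .Machine$double.eps and to the magnitude of the column sums themselves:
# Largest absolute discrepancy between the two methods
max(abs(colSums(a.sparse) - apply(a.sparse, MARGIN = 2, FUN = sum)))
# Machine epsilon, for comparison
.Machine$double.eps
# Same discrepancy expressed relative to the size of the column sums
max(abs(colSums(a.sparse) - apply(a.sparse, MARGIN = 2, FUN = sum))) /
  max(abs(colSums(a.sparse)))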