Suppose I have an arbitrary probability matrix P, like below:
P = matrix(c(0.3,0.2,0.2,0.2,0.3,0.2,0.2,0.2,0.3),3,3)
P
[,1] [,2] [,3]
[1,] 0.3 0.2 0.2
[2,] 0.2 0.3 0.2
[3,] 0.2 0.2 0.3
For a single adjacency matrix, it is generated like this (unweighted, no self-loops):
tmpmat = matrix(runif(3^2), nrow = 3)
tmpG = 1 * (tmpmat < P)
tmpG[lower.tri(tmpG)] <- 0
tmpG <- t(tmpG) + tmpG - diag(diag(tmpG))
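A quick sanity check on the snippet above (with the `tem`/`tmpmat` typo fixed; the `set.seed` call is only for reproducibility and is not part of the method):

```r
# Sanity check for the single-matrix snippet above
set.seed(1)                                # reproducibility only
tmpmat <- matrix(runif(3^2), nrow = 3)
tmpG <- 1 * (tmpmat < P)                   # Bernoulli draws with probabilities in P
tmpG[lower.tri(tmpG)] <- 0                 # keep only the upper triangle (and diagonal)
tmpG <- t(tmpG) + tmpG - diag(diag(tmpG))  # mirror without doubling the diagonal
isSymmetric(tmpG)                          # TRUE: symmetric by construction
all(tmpG %in% c(0, 1))                     # TRUE: entries are 0/1
```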
However, I need to generate 100 adjacency matrices, so I wrote the following code:
G = list()
for (i in 1:rep) {
  tmpmat = matrix(runif(n^2), nrow = n)
  tmpG = 1 * (tmpmat < P)
  tmpG[lower.tri(tmpG)] <- 0
  tmpG <- t(tmpG) + tmpG - diag(diag(tmpG))
  if (noloop) {
    diag(tmpG) = 0
  }
  G[[i]] = tmpG
}
In my case, n > 10000 and rep = 1000, so it is extremely slow. Any better idea to improve this?
`…})` will be faster, but by the time your matrices get big it probably won't matter. I don't think you'll get much speed without improving your algorithm. Maybe you could work on only operating on the upper triangle?
– Gregor Thomas Oct 27 '20 at 03:25
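Following the comment's suggestion, here is a sketch of the upper-triangle idea: draw only the n*(n-1)/2 uniforms that are actually used, compare them against the corresponding entries of P, and mirror. The helper name `gen_adj` is mine, and this assumes P is symmetric:

```r
# Sketch (not the poster's code): sample only the upper triangle, then mirror.
# Assumes P is a symmetric probability matrix.
gen_adj <- function(P, noloop = TRUE) {
  n <- nrow(P)
  G <- matrix(0L, n, n)
  up <- upper.tri(G)                            # strictly above the diagonal
  G[up] <- as.integer(runif(sum(up)) < P[up])   # n*(n-1)/2 draws instead of n^2
  G <- G + t(G)                                 # mirror into the lower triangle
  if (!noloop) diag(G) <- as.integer(runif(n) < diag(P))  # optional self-loops
  G
}

G <- lapply(1:1000, function(i) gen_adj(P))     # list of replicates
```

Note that at n > 10000, a single dense integer matrix already takes roughly 400 MB, so 1000 of them need on the order of 400 GB; a sparse representation (e.g. the Matrix package) is probably unavoidable regardless of how fast the sampling itself is.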