
Inspired by these questions 1, 2, I'm trying to turn a data.table into an adjacency matrix/edge list and then into an igraph object. I have a dataset with two columns (A, B) that serve as IDs for the purpose of pairing: A represents the links, and B contains the nodes (vertices). In my real dataset, A has 25352 unique values and B has 75352. This will create a big network, so I'm trying to find the most efficient way to get either an adjacency matrix or an edge list. I have tried these methods so far:

library(data.table)
library(dplyr)
library(igraph)
library(microbenchmark)
n <- 1000
set.seed(123634)
DT <- data.table(A=replicate(n, paste0(sample(LETTERS, 2), collapse = "")),
                B=replicate(n, paste0(sample(LETTERS, 4), collapse = "")))
lapply(DT, function(x){length(unique(x))})   
$A
[1] 503

$B
[1] 998
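
As an aside, data.table's `uniqueN` gives the same counts in a single call (a minor aside, not benchmarked):

DT[, lapply(.SD, uniqueN)]
     A   B
1: 503 998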

### `table + crossprod` Method (adjacency matrix):
fn1 <- function(DT) {
  # cross-tabulate A x B, then crossprod gives B x B co-occurrence counts
  crossprod(table(DT))
}

### `dcast + crossprod` Method (adjacency matrix):
fn2 <- function(DT) {
  # cast to a wide A x B count table, drop the A column, then crossprod
  crossprod(as.matrix(dcast(
    DT, A ~ B, value.var = "B", fun.aggregate = length
  )[, -1]))
}

### `xtabs + tcrossprod` Method (adjacency matrix):
fn3 <- function(DT) {
  # xtabs builds the B x A incidence table; tcrossprod gives B x B counts
  tcrossprod(xtabs( ~ B + A, DT))
}

### `merge` Method (edge list):
fn4 <- function(DT) {
  # Cartesian self-join on A; keep pairs of distinct B's and drop the A column
  temp <- merge(DT, DT, by = "A", allow.cartesian = TRUE)
  temp[temp$B.x != temp$B.y, -1]
}

### `dplyr` Method (edge list):
fn5 <- function(DT) {
  # within each A having at least two B's, enumerate all unordered pairs
  DT %>% group_by(A) %>%
    filter(n() >= 2) %>%
    do(data.frame(t(combn(.$B, 2)), stringsAsFactors = FALSE))
}
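
For reference, a data.table analogue of fn5 would enumerate each unordered pair exactly once per group; a sketch (`fn5_dt` is a name I made up, not benchmarked below):

### `data.table + combn` Method (edge list):
fn5_dt <- function(DT) {
  # for every A appearing at least twice, list each unordered pair of B's once
  DT[, if (.N >= 2) as.data.table(t(combn(B, 2))), by = A]
}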

Update 1: following @Axeman's comment

### `merge` Method (edge list):
fn4 <- function(DT) {
  # keying on A speeds up the join, as suggested by @Axeman
  setkey(DT, A)
  temp <- merge(DT, DT, by = "A", allow.cartesian = TRUE)
  temp[temp$B.x != temp$B.y, -1]
}
### `full_join + filter` Method (edge list):
fn6 <- function(DT) {
  full_join(DT, DT, by = 'A') %>% filter(B.x != B.y)
}

Results 1

microbenchmark(fn1(DT), fn2(DT), fn3(DT), fn4(DT), fn5(DT), fn6(DT), times = 100)
    expr        min         lq       mean     median         uq        max neval   cld
 fn1(DT) 291.754120 293.959476 304.203825 294.875436 300.686430 373.804013   100    d 
 fn2(DT) 346.626929 349.101024 367.754884 350.903514 370.477299 448.036178   100     e
 fn3(DT)   9.969924  10.420903  14.692905  10.784544  11.451784  78.009518   100  b   
 fn4(DT)   1.816473   2.156643   2.430527   2.366402   2.504144   4.551233   100 a    
 fn5(DT) 125.481956 133.189609 157.177028 137.107701 195.092453 297.355731   100   c  
 fn6(DT)   2.339659   2.719236   3.058402   2.985036   3.138265   5.468647   100 a  

The merge method (fn4) is the fastest so far; any ideas or suggestions will be very much appreciated.

Warning:

fn4 and fn6, the fastest methods, rely on the Cartesian product of the merge and therefore create duplicated connections. Moreover, because of the filter `temp$B.x != temp$B.y`, all non-connected vertices are removed from the graph, which can also be misleading.

n <- 5
set.seed(123634)
DT <- data.table(A=replicate(n, sample(1:2, 1)),
                 B=replicate(n, paste0(sample(LETTERS[1:3], 2), collapse = "")))
DT
   A  B
1: 2 AB
2: 2 AC
3: 1 AC
4: 1 AB
5: 2 BA

## Method 1
a <- graph_from_adjacency_matrix(fn1(DT), mode = "undirected")
a <- simplify(a, remove.multiple = FALSE, remove.loops = TRUE)
get.adjacency(a)
   AB AC BA
AB  .  2  1
AC  2  .  1
BA  1  1  .

## Method 4
c <- graph_from_data_frame(fn4(DT), directed = FALSE)
get.adjacency(c)
   AB AC BA
AB  .  4  2
AC  4  .  2
BA  2  2  .

## Method 6
f <- graph_from_data_frame(fn6(DT)[, 2:3], directed = FALSE)
get.adjacency(f)
   AB AC BA
AB  .  4  2
AC  4  .  2
BA  2  2  .
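
The doubling is easy to verify on the small example: every off-diagonal entry of the merge-based matrix is exactly twice the corresponding fn1 entry (a quick check, reusing c from Method 4 above):

m1 <- fn1(DT)
diag(m1) <- 0                       # fn1 keeps loop counts on the diagonal
m4 <- as.matrix(get.adjacency(c))   # Method 4's adjacency
all(m4[rownames(m1), colnames(m1)] == 2 * m1)
[1] TRUE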

Update 2: Correcting duplicates and accounting for disconnected nodes.

fn4 <- function(DT) {
  setkey(DT, A)
  # keep the full Cartesian product (loops included); simplify() removes the
  # loops, and each undirected connection is counted twice, hence the / 2
  temp <- merge(DT, DT, by = "A", allow.cartesian = TRUE)[, 2:3]
  setorder(temp, B.x)
  get.adjacency(simplify(
    graph_from_data_frame(temp, directed = FALSE),
    remove.multiple = FALSE,
    remove.loops = TRUE)) / 2
}
fn6 <- function(DT) {
  full_join(DT, DT, by = 'A')[2:3] %>%
    setorder(B.x) %>%
    graph_from_data_frame(directed = FALSE) %>%
    simplify(remove.multiple = FALSE, remove.loops = TRUE) %>%
    get.adjacency() / 2
}
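
A quick sanity check on the small DT confirms the corrected fn4 now agrees with fn1 once loops are dropped (copy() is needed because fn4's setkey reorders DT by reference):

m1 <- fn1(DT)
diag(m1) <- 0
m4 <- as.matrix(fn4(copy(DT)))
all(m4[rownames(m1), colnames(m1)] == m1)
[1] TRUE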

Results 2

   expr        min         lq       mean     median         uq       max neval  cld
 fn1(DT) 292.755855 295.047878 301.545026 295.890292 297.364117 382.01720   100   c 
 fn2(DT) 349.139294 351.886946 371.612651 353.392465 394.686377 528.48418   100    d
 fn3(DT)  10.075716  10.500732  15.642757  10.767010  11.379872  79.36882   100 a   
 fn4(DT)   7.382669   7.968354   8.494499   8.204351   8.585933  18.17826   100 a   
 fn5(DT) 126.307694 134.317938 152.548209 135.883273 177.473529 210.14054   100  b  
 fn6(DT)   8.540844   9.119288   9.833154   9.637090  10.055865  18.84172   100 a  
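
For completeness, igraph can also build this projection natively by treating (A, B) as a bipartite graph. A sketch (`fn7` is a name I made up, not benchmarked, and it assumes no value ever occurs in both A and B):

### `bipartite_projection` Method (adjacency matrix):
fn7 <- function(DT) {
  g <- graph_from_data_frame(DT, directed = FALSE)
  # mark the B side of the bipartite graph
  V(g)$type <- V(g)$name %in% DT$B
  # project onto B; shared-A counts land in the "weight" edge attribute
  proj <- bipartite_projection(g, which = "true", multiplicity = TRUE)
  get.adjacency(proj, attr = "weight")
}

Unlike the merge-based methods, the projection keeps B vertices that share no A, so disconnected nodes are not lost.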
  • `fn6 <- function(DF) { full_join(DF, DF, by = 'A') %>% filter(B.x != B.y) }` performs very similarly to the data.table solution (at least for this size). Using `setkey(DT, A)` may improve performance for `fn4`. – Axeman Mar 13 '17 at 13:48
  • Thanks, I will update the post in the evening. I think fn4 is performing better with `setkey`. This will be more evident with bigger datasets. – Mario GS Mar 13 '17 at 15:47
  • @Axeman, I was testing the outcomes by using the `identical_graphs` function of `igraph`, and they don't match. If I apply `simplify` and then run `isomorphic`, on small graphs I get `TRUE`, but it doesn't hold for larger graphs. Do you have any idea why? – Mario GS Mar 13 '17 at 20:41
  • @Axeman, the merge method is duplicating the connections; this is reflected in the weights. – Mario GS Mar 13 '17 at 21:21
