I have a fairly simple task to do, but I can't finish it because of memory problems, so I am wondering if there is a more efficient way to do this. I have a big data.frame that looks like this:
This data frame is called sp_df, and I compute the pairwise distances between all points in R. The problem is that, because of its size, I can't convert the dist object to a matrix and melt it. And that's where I am blocked.
library(reshape2)  # for melt()

sp_df <- read.csv("Euclidean_80K_Spots.csv", header = TRUE)
sp_dist <- dist(sp_df)
sp_dist_m <- melt(as.matrix(sp_dist), varnames = c("ID", "neig"))
The dist object is 16.2 GB, and I cannot split the data into smaller chunks because of the underlying biology. All I could do is filter the distances and keep only those < 2, but I don't know how to do that. Any help would be much appreciated! Thanks!
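Something like the block-wise sketch below is roughly what I have in mind, though I am not sure it is the right way. It assumes sp_df holds only numeric coordinate columns (as dist(sp_df) implies), and block_size is an arbitrary placeholder to tune to available memory:

# Process the points in blocks so the full 80K x 80K matrix never
# has to exist in memory; keep only pairs closer than 2.
coords <- as.matrix(sp_df)
n <- nrow(coords)
block_size <- 1000                 # placeholder; adjust to available RAM
sq_norms <- rowSums(coords^2)
results <- list()

for (start in seq(1, n, by = block_size)) {
  end <- min(start + block_size - 1, n)
  block <- coords[start:end, , drop = FALSE]
  # Cross-distances between this block and all points, via
  # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 * (a . b)
  d2 <- outer(sq_norms[start:end], sq_norms, "+") - 2 * tcrossprod(block, coords)
  d <- sqrt(pmax(d2, 0))           # guard against tiny negative rounding errors
  keep <- which(d < 2 & d > 0, arr.ind = TRUE)  # drop self-pairs (and exact duplicates)
  results[[length(results) + 1]] <- data.frame(
    ID   = start + keep[, "row"] - 1,
    neig = keep[, "col"],
    dist = d[keep]
  )
}

sp_dist_m <- do.call(rbind, results)

Like the melted full matrix, this keeps both directions of each pair (i, j) and (j, i); filtering on keep[, "col"] > ID would halve the output. I have also seen the dbscan package mentioned for this kind of fixed-radius neighbor search (dbscan::frNN(coords, eps = 2)), but I haven't tried it myself.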