I know the 'large matrix issue' is a recurrent topic here, but I would like to explain my specific problem with large matrices in detail.
Strictly speaking, I want to cbind several large matrices with a specific name pattern in R. The code below shows my best attempt so far.
First, let's produce some files that mimic my real matrices:
# The df1
df1 <- '######## infx infx infx
######## infx infx infx
probeset_id sample1 sample2 sample3
PR01 1 2 0
PR02 -1 2 0
PR03 2 1 1
PR04 1 2 1
PR05 2 0 1'
df1 <- read.table(text=df1, header=T, skip=2)
write.table(df1, "df1.txt", col.names=T, row.names=F, quote=F, sep="\t")
# The df2
df2 <- '######## infx infx infx
######## infx infx infx
probeset_id sample4 sample5 sample6
PR01 2 2 1
PR02 2 -1 0
PR03 2 1 1
PR04 1 2 1
PR05 0 0 1'
df2 <- read.table(text=df2, header=T, skip=2)
write.table(df2, "df2.txt", col.names=T, row.names=F, quote=F, sep="\t")
# The dfn
dfn <- '######## infx infx infx
######## infx infx infx
probeset_id samplen1 samplen2 samplen3
PR01 2 -1 1
PR02 1 -1 0
PR03 2 1 1
PR04 1 2 -1
PR05 0 2 1'
dfn <- read.table(text=dfn, header=T, skip=2)
write.table(dfn, "dfn.txt", col.names=T, row.names=F, quote=F, sep="\t")
Then import them into R and write the expected output file:
### Importing and excluding the duplicated 'probeset_id' columns
library(data.table)
calls <- list.files(pattern = "*.txt")
calls <- lapply(calls, fread, header = TRUE)              # read every file
mycalls <- as.data.frame(calls)                           # column-bind all files
probenc <- as.data.frame(mycalls[, 1])                    # keep a single probeset_id column
mycalls <- mycalls[, -grep("probe", colnames(mycalls))]   # drop the repeated id columns
output <- cbind(probenc, mycalls)
names(output)[1] <- "probeset_id"
write.table(output, "output.txt", col.names = TRUE, row.names = FALSE, quote = FALSE, sep = "\t")
This is how the output looks:
> head(output)
probeset_id sample1 sample2 sample3 sample4 sample5 sample6 samplen1 samplen2 samplen3
1 PR01 1 2 0 2 2 1 2 -1 1
2 PR02 -1 2 0 2 -1 0 1 -1 0
3 PR03 2 1 1 2 1 1 2 1 1
4 PR04 1 2 1 1 2 1 1 2 -1
5 PR05 2 0 1 0 0 1 0 2 1
This code works perfectly for what I want to do; however, with my real data I hit the well-known R memory limitation: more than 30 "df" objects, each around 1.3 GB (roughly 600k rows by 100 columns), which adds up to about 40 GB in total.
I read about a very interesting SQL approach (R: how to rbind two huge data-frames without running out of memory), but I am inexperienced with SQL and have not found a way to adapt it to my case.
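Instead of SQL, I have also been toying with the idea of streaming the files line by line through connections, so that only one row per file sits in memory at any moment. Below is a minimal, untested sketch of that idea. The file pattern "^df.*\\.txt$" and the output name "combined.txt" are just placeholders for my real file names, and it assumes the files are tab-separated, list the probesets in the same order (which my cbind approach already assumes), and no longer contain the two '########' comment lines:

# Sketch: paste files together column-wise, one row at a time
files <- list.files(pattern = "^df.*\\.txt$")     # placeholder pattern for the input files
cons  <- lapply(files, file, open = "r")          # one open connection per file
out   <- file("combined.txt", open = "w")

repeat {
  pieces <- lapply(cons, readLines, n = 1)        # read one line from every file
  if (any(lengths(pieces) == 0)) break            # stop when the files are exhausted
  pieces <- unlist(pieces)
  # keep probeset_id only from the first file, drop it from all the others
  rest <- vapply(strsplit(pieces[-1], "\t"),
                 function(x) paste(x[-1], collapse = "\t"),
                 character(1))
  writeLines(paste(c(pieces[1], rest), collapse = "\t"), out)
}

invisible(lapply(cons, close))
close(out)

Would something along these lines be a reasonable way around the memory limit, or is the SQL/database route still the better option?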
Cheers,