
Using a reference table, I'm trying to relocate the data from several columns into rows while keeping some important fields (the other columns).

Name    Amplitude    A           B        
M2      3.264         29.0       28.98 
S2      0.781         51.9       30.0  
N2      0.63          12.3       28.43 
K1      1.263        136.8       15.04 
M4      0.043        286.0       57.96 

I need the final result to look like this:

Name    Amplitude    Value  Code
M2      3.264         29.0  A
S2      0.781         51.9  A
N2      0.63          12.3  A
K1      1.263        136.8  A
M4      0.043        286.0  A
M2      3.264        28.98  B
S2      0.781         30.0  B
N2      0.63         28.43  B
K1      1.263        15.04  B
M4      0.043        57.96  B

This is just an example: in the real table I have more columns between Amplitude and A. I use the following code:

Final <- NULL
colname <- colnames(ReferenceAll)

for (i in 1:nrow(ReferenceAll)) {
  for (j in 1:ncol(ReferenceAll)) {
    if (j > 2) {  # columns after the first two hold the values I want in the results
      temp <- as.data.frame(rbind(cbind(Name = ReferenceAll[i, 1],
                                        Amplitude = as.character.factor(ReferenceAll[i, 2]),
                                        Value = ReferenceAll[i, j],
                                        Code = colname[j])))
      Final <- rbind(Final, temp)
    }
  }
}

When I have a few rows it takes milliseconds, but when I have more than 100 rows it takes hours. Can anyone help me?

FJ Hugs
  • `library(reshape2);melt(df1, id.vars=c("Name","Amplitude"), value.name="Value", variable.name="Code")` – RHertel Mar 21 '16 at 15:29
  • base R: `reshape(df, varying = c("A","B"), direction = "long", v.names = c("Value"), times = c("A","B"))` – Zelazny7 Mar 21 '16 at 15:55

1 Answer


We can use `melt` from `data.table`. It should be fast compared to the `for` loop.

library(data.table)
# keep the first two columns (Name, Amplitude) as ids; stack the rest into Value/Code
melt(setDT(df1), id.var=1:2, value.name="Value", variable.name="Code")
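
For completeness, here is a minimal reproducible sketch using the example table from the question; the object name `df1` is assumed from the comments above, and the data-frame construction is just a stand-in for however the real table is read in.

    library(data.table)

    # Rebuild the question's example input table
    df1 <- data.frame(
      Name      = c("M2", "S2", "N2", "K1", "M4"),
      Amplitude = c(3.264, 0.781, 0.63, 1.263, 0.043),
      A         = c(29.0, 51.9, 12.3, 136.8, 286.0),
      B         = c(28.98, 30.0, 28.43, 15.04, 57.96)
    )

    # Name and Amplitude stay as id columns; every remaining column (A, B, ...)
    # is stacked into a Value column, with the source column name stored in Code
    melt(setDT(df1), id.vars = 1:2, value.name = "Value", variable.name = "Code")

`melt` builds the long table in a single pass, whereas the loop copies the whole `Final` object on every `rbind`, which is why the loop's runtime blows up as the number of rows grows.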
akrun