I have a statistical routine that does not cope with exact duplicate rows (there is no ID column), because they result in null distances.
So I first detect the duplicates and remove them, apply my routine, and then merge back the records that were left aside.
For simplicity, assume I use rownames as the ID/key.
I have found the following way to achieve this in base R:
data <- data.frame(x=c(1,1,1,2,2,3),y=c(1,1,1,4,4,3))
# check duplicates and get their ID -- cf. https://stackoverflow.com/questions/12495345/find-indices-of-duplicated-rows
dup1 <- duplicated(data)
dupID <- rownames(data)[dup1 | duplicated(data[nrow(data):1, ])[nrow(data):1]]
# keep only the records that do have duplicates, to avoid running the following steps on all rows
datadup <- data[dupID,]
# "hash" row
rowhash <- apply(datadup, 1, paste, collapse="_")
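# group the rownames of identical rows under their common hash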
idmaps <- split(rownames(datadup),rowhash)
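# the first ID of each group is the one kept in the deduplicated data; the others map to it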
idmaptable <- do.call("rbind",lapply(idmaps,function(vec)data.frame(mappedid=vec[1],otherids=vec[-1],stringsAsFactors = FALSE)))
This gives me what I want, i.e. the deduplicated data (the easy part) and the mapping table:
> (data <- data[!dup1,])
x y
1 1 1
4 2 4
6 3 3
> idmaptable
mappedid otherids
1_1.1 1 2
1_1.2 1 3
2_4 4 5
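For context, here is roughly how I use the mapping table afterwards. The routine's output (res below) is hypothetical; I only assume it returns one row per deduplicated record, keyed by rowname:

res <- data.frame(score = c(0.1, 0.5, 0.9), row.names = rownames(data))  # hypothetical routine output
# the duplicates left aside receive the result computed for their mappedid
res_dup <- res[idmaptable$mappedid, , drop = FALSE]
rownames(res_dup) <- idmaptable$otherids
res_full <- rbind(res, res_dup)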
I wonder whether there is a simpler or more efficient way to do this (data.table / dplyr solutions accepted). Any alternatives to propose?
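For concreteness, the kind of alternative I imagine would look something like this data.table sketch (untested beyond the toy example above, not benchmarked; keep.rownames = TRUE stores the original rownames in an rn column):

library(data.table)
dt <- as.data.table(data.frame(x=c(1,1,1,2,2,3), y=c(1,1,1,4,4,3)), keep.rownames = TRUE)
keycols <- setdiff(names(dt), "rn")
# groups of size 1 return NULL and are dropped, so only records with duplicates appear
idmaptable_dt <- dt[, if (.N > 1) .(mappedid = rn[1], otherids = rn[-1]), by = keycols]
dedup_dt <- unique(dt, by = keycols)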