
I have two sets of data.

Sample of set_A (total number of rows: 45467):

ID_a    a1  a2  a3  time_a
2   35694   5245.2  301.6053    00.00944
3   85694   9278.9  301.6051    23.00972
4   65694   9375.2  301.6049    22.00972
5   85653   4375.5  301.6047    19.00972
6   12694   5236.3  301.6045    22.00972
7   85697   5345.2  301.6043    21.00972
8   85640   5274.1  301.6041    20.01000
9   30694   5279.0  301.6039    20.01000

Sample of set_B (total number of rows: 4798):

ID_b    b1  b2  source  time_b
2   34.20   15.114  set1.csv.1  20.35750
7   67.20   16.114  set1.csv.2  21.35778
12  12.20   33.114  set1.csv.3  22.35806
17  73.20   67.114  set2.csv.1  23.35833
23  88.20   42.114  set2.csv.2  19.35861
28  90.20   52.114  set3.csv.1  00.35889

I am interested in a result where rows from set_A are matched to set_B by the nearest values of time_a and time_b (total number of output rows: 4798). In set_A, values of time_a can be repeated several times (e.g. ID_a[8,] and ID_a[9,]); it doesn't really matter which of those rows gets merged with the row from set_B (in this case ID_b[1,]). Example of the expected result:

ID_b    b1  b2  source  time_b      ID_a    a1  a2  a3  time_a
2   34.20   15.114  set1.csv.1  20.35750    8   85640   5274.1  301.6041    20.01000
7   67.20   16.114  set1.csv.2  21.35778    7   85697   5345.2  301.6043    21.00972
12  12.20   33.114  set1.csv.3  22.35806    4   65694   9375.2  301.6049    22.00972
17  73.20   67.114  set2.csv.1  23.35833    3   85694   9278.9  301.6051    23.00972
23  88.20   42.114  set2.csv.2  19.35861    5   85653   4375.5  301.6047    19.00972
28  90.20   52.114  set3.csv.1  00.35889    2   35694   5245.2  301.6053    00.00944

I went through many similar questions on Stack Overflow and I really like the data.table solutions, as they look elegant. However, I made several failed attempts where I received either a table built from both sets (total number of rows: 45467) or only the time_a column merged into set_B... Nevertheless, I won't be picky, and if somebody has another idea I would be very grateful for the help.

Example of the code I'm working on:

setDT(set_B)
setDT(set_A)
setkey(set_B, time_b) [, time_a:=time_b]
test_ab <- set_B[set_A, roll='nearest']

As a result, I am receiving not only a table with data that should be discarded, but also "a mess" in the column names (e.g. the column which contains ID_a values is called time_a).

I would really appreciate your help!

barbrka

1 Answer


Here is a step-by-step example based on the sample data you provided:

# Sample data (definitions at the bottom of this answer)
library(data.table)
setDT(set_A)
setDT(set_B)    

# Create time column by which to do a rolling join
set_A[, time := time_a]
set_B[, time := time_b]
setkey(set_A, time)
setkey(set_B, time)

# Rolling join by nearest time
set_merged <- set_B[set_A, roll = "nearest"]

unique(set_merged[order(ID_b)], by = "time")
#    ID_b   b1     b2     source   time_b     time ID_a    a1     a2       a3
# 1:    2 34.2 15.114 set1.csv.1 20.35750 20.01000    8 85640 5274.1 301.6041
# 2:    7 67.2 16.114 set1.csv.2 21.35778 21.00972    7 85697 5345.2 301.6043
# 3:   12 12.2 33.114 set1.csv.3 22.35806 22.00972    4 65694 9375.2 301.6049
# 4:   17 73.2 67.114 set2.csv.1 23.35833 23.00972    3 85694 9278.9 301.6051
# 5:   23 88.2 42.114 set2.csv.2 19.35861 19.00972    5 85653 4375.5 301.6047
# 6:   28 90.2 52.114 set3.csv.1  0.35889  0.00944    2 35694 5245.2 301.6053
#      time_a
# 1: 20.01000
# 2: 21.00972
# 3: 22.00972
# 4: 23.00972
# 5: 19.00972
# 6:  0.00944

Two comments:

  1. We create a new time column to avoid losing one of the original time columns from set_A and set_B. You can always remove the time column after the join if required (see the sketch after this list).
  2. We use unique to remove duplicated time rows, keeping the first row in order of ID_b. You mention in your post that "it doesn't really matter which row will be merged", but in case you do want to retain specific rows, you might need to adjust this line of code.
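For instance, a minimal cleanup sketch based on the objects created above; it assumes ID_b is unique in set_B (which the sample data suggests), so you could deduplicate on ID_b instead of on time, and then drop the helper column:

# Keep one matched row per ID_b instead of per time value
set_result <- unique(set_merged[order(ID_b)], by = "ID_b")

# Drop the helper join column if it is no longer needed
set_result[, time := NULL]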

Update (thanks to @Henrik)

As @Henrik pointed out, what you're after is actually a rolling join of set_A with respect to set_B, in which case you don't need to deal with the duplicate rows.

That translates to

library(data.table)
setDT(set_A)
setDT(set_B)    

# Create time column by which to do a rolling join
set_A[, time := time_a]
set_B[, time := time_b]

set_A[set_B, on = "time", roll = "nearest"][order(ID_a)]
#   ID_a    a1     a2       a3   time_a     time ID_b   b1     b2     source
#1:    2 35694 5245.2 301.6053  0.00944  0.35889   28 90.2 52.114 set3.csv.1
#2:    3 85694 9278.9 301.6051 23.00972 23.35833   17 73.2 67.114 set2.csv.1
#3:    5 85653 4375.5 301.6047 19.00972 19.35861   23 88.2 42.114 set2.csv.2
#4:    6 12694 5236.3 301.6045 22.00972 22.35806   12 12.2 33.114 set1.csv.3
#5:    7 85697 5345.2 301.6043 21.00972 21.35778    7 67.2 16.114 set1.csv.2
#6:    9 30694 5279.0 301.6039 20.01000 20.35750    2 34.2 15.114 set1.csv.1
#  time_b
#1:  0.35889
#2: 23.35833
#3: 19.35861
#4: 22.35806
#5: 21.35778
#6: 20.35750
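If you additionally want the result to have the same column layout as the expected output in your question (set_B columns first, ordered by ID_b), one possible cleanup sketch, using the column names from the sample data, is:

res <- set_A[set_B, on = "time", roll = "nearest"]

# Drop the helper join column and put the set_B columns first
res[, time := NULL]
setcolorder(res, c("ID_b", "b1", "b2", "source", "time_b",
                   "ID_a", "a1", "a2", "a3", "time_a"))
res[order(ID_b)]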

Sample data

set_A <- read.table(text =
    "ID_a    a1  a2  a3  time_a
2   35694   5245.2  301.6053    00.00944
3   85694   9278.9  301.6051    23.00972
4   65694   9375.2  301.6049    22.00972
5   85653   4375.5  301.6047    19.00972
6   12694   5236.3  301.6045    22.00972
7   85697   5345.2  301.6043    21.00972
8   85640   5274.1  301.6041    20.01000
9   30694   5279.0  301.6039    20.01000", header = T)

set_B <- read.table(text =
    "ID_b    b1  b2  source  time_b
2   34.20   15.114  set1.csv.1  20.35750
7   67.20   16.114  set1.csv.2  21.35778
12  12.20   33.114  set1.csv.3  22.35806
17  73.20   67.114  set2.csv.1  23.35833
23  88.20   42.114  set2.csv.2  19.35861
28  90.20   52.114  set3.csv.1  00.35889", header = T)
Maurits Evers
  • Thanks a lot for the code and simple clear explanation, it helped a lot! I've just made one change in line: `unique(set_merged[order(ID_b)], by = "time")` Instead, I've written `unique(set_merged[order(ID_b)], by = "ID_b")` as some values of time in `set_B` were duplicated as well (`ID_b` is unique). – barbrka Jan 03 '19 at 14:06
    Don't you want to use "set_B" in `i` to lookup rows of "set_A" in `x`? `set_A[set_B, on="time", roll = "nearest"]`. Then you don't need the `unique` step. Furthermore, if you use `on`, the two `setkey` steps are not needed. Also, `setDT` updates by reference, so the assignment (`<-`) is redundant. Note that due to the duplicated times in 'set_A', some matches differ. – Henrik Jan 03 '19 at 17:30
    small addition: if you use `fread` instead of `read.table`, the result of the read is already a `data.table`. It also is (or seems to be) better practice to use `TRUE` instead of `T`. – Wimpel Jan 04 '19 at 09:13
  • @Henrik From the original post (and from what I understand) it seems OP wants `set_B[set_A, ...]`. Thanks for pointing out the update-by-reference bit though, I've made a change. – Maurits Evers Jan 04 '19 at 09:13
  • Hi again @Maurits Evers! `set_B[set_A, ...]` is an outer join **with respect to 'set_A'**, i.e. the result contains all rows of 'set_A' (hence the need for your additional step). However, because the expected answer in OP contains the rows from 'set_B', my interpretation was that OP needed `set_A[set_B, ...]`, i.e. we use 'set_B' to lookup rows of 'set_A'. Well, well, OP is happy, so I suppose everything is fine then ;) Cheers. – Henrik Jan 04 '19 at 10:07
  • Ah I see @Henrik; thanks for the clarification. I'm not too familiar with the `data.table` syntax yet. I've made an edit. – Maurits Evers Jan 04 '19 at 11:03
    I think the small examples in this question (yes, in the actual question) are very nice for getting a better feeling for the `data.table` join types: [Why does XY join of data.tables not allow a full outer join, or a left join?](https://stackoverflow.com/questions/12773822/why-does-xy-join-of-data-tables-not-allow-a-full-outer-join-or-a-left-join?noredirect=1&lq=1). – Henrik Jan 04 '19 at 11:31
    Personally, I like to use the `on` argument, rather than setting keys. In `x[i, on = , ...]` it's explicit what you are joining on. Some nice examples with `on` [here](https://stackoverflow.com/questions/1299871/how-to-join-merge-data-frames-inner-outer-left-right/34219998#34219998). Cheers – Henrik Jan 04 '19 at 11:32
  • Hi everyone, I have asked a similar question to this one, where I would like to merge two data frames by multiple columns (instead of only one) using the nearest-value method. I think it could be interesting for you too: https://stackoverflow.com/questions/70455623/merging-two-data-sets-by-multiple-columns-and-by-data-table-roll-nearest-funct – Strobila Dec 24 '21 at 10:34