Using data.table, I'd do this directly as follows:
require(data.table) # v1.9.6+
dt1[dt2, ratio := unlist(mget(age.cat)), by=.EACHI, on="cntry"]
where,
dt1 = as.data.table(survey)[, age.cat := as.character(age.cat)]
dt2 = as.data.table(age.cat)
For each row in dt2, the matching rows in dt1 are found by matching dt1$cntry against dt2$cntry (it helps to think of it as a subset operation, matching on the cntry column). The age.cat values for those matching rows are extracted and passed to the mget() function, which looks for variables named after those values, finds them among dt2's columns (columns of dt2 are also visible in j for exactly this purpose), and extracts the corresponding values. Since mget() returns a list, we unlist() it. Those values are then assigned to the column ratio by reference.
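To make the mechanics concrete, here is a minimal, self-contained sketch on made-up data shaped like dt1 and dt2 (the category column Y_15.25 and all the values are hypothetical, so the numbers differ from the outputs further down, which come from the question's data):

library(data.table)
# toy stand-ins: dt1 = the survey (age.cat stored as character), dt2 = per-country ratios
dt1 <- data.table(cntry   = c("FR", "UK", "FR", "DE"),
                  age.cat = c("Y_less.15", "Y_15.25", "Y_15.25", "Y_less.15"))
dt2 <- data.table(cntry     = c("FR", "UK", "DE"),
                  Y_less.15 = c(0.2, 0.2, 0.3),
                  Y_15.25   = c(0.3, 0.4, 0.1))

dt1[dt2, ratio := unlist(mget(age.cat)), by = .EACHI, on = "cntry"]
dt1
#    cntry   age.cat ratio
# 1:    FR Y_less.15   0.2
# 2:    UK   Y_15.25   0.4
# 3:    FR   Y_15.25   0.3
# 4:    DE Y_less.15   0.3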
Since this avoids unnecessary materialisation of intermediate data through melting/gathering, it is quite efficient. Additionally, since it adds a new column by reference while joining, it avoids another intermediate materialisation and is doubly efficient.
Personally, I find the code much more straightforward in terms of understanding what's going on (with sufficient base R knowledge, of course), but that is subjective.
Slightly more detailed explanation:
The general form of data.table syntax is DT[i, j, by], which reads:
Take DT, subset rows by i, then compute j, grouped by by.
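As a tiny illustration of all three parts together (made-up data, unrelated to the question):

library(data.table)
DT <- data.table(cntry = c("FR", "FR", "UK"), val = 1:3)
DT[val > 1, sum(val), by = cntry]   # i: rows where val > 1; j: sum(val); by: per cntry
#    cntry V1
# 1:    FR  2
# 2:    UK  3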
The i argument in data.table, in addition to taking subset expressions such as dt1[cntry == "FR"], can also be another data.table. Consider the expression dt1[dt2, on="cntry"].
The first thing it does is to compute, for each row in dt2, all the matching row indices in dt1 by matching on the column provided in on="cntry". For example, for dt2$cntry == "FR", the matching row indices in dt1 are c(1,4,5,10). These row indices are computed internally using fast binary search.
Once the matching row indices are computed, it checks whether an expression is provided in the j argument. In the above expression j is empty, therefore all the columns from both dt1 and dt2 are returned (resulting in a right join).
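To see both steps on concrete data, here is a sketch using a fresh copy of the hypothetical toy dt1/dt2 from above; the which argument exposes the matching row indices instead of returning the joined rows:

library(data.table)
dt1 <- data.table(cntry   = c("FR", "UK", "FR", "DE"),
                  age.cat = c("Y_less.15", "Y_15.25", "Y_15.25", "Y_less.15"))
dt2 <- data.table(cntry     = c("FR", "UK", "DE"),
                  Y_less.15 = c(0.2, 0.2, 0.3),
                  Y_15.25   = c(0.3, 0.4, 0.1))

dt1[dt2, which = TRUE, on = "cntry"]   # row indices of dt1 matched by each row of dt2
# [1] 1 3 2 4

dt1[dt2, on = "cntry"]                 # j empty: all columns of both tables (right join)
#    cntry   age.cat Y_less.15 Y_15.25
# 1:    FR Y_less.15       0.2     0.3
# 2:    FR   Y_15.25       0.2     0.3
# 3:    UK   Y_15.25       0.2     0.4
# 4:    DE Y_less.15       0.3     0.1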
In other words, data.table allows join operations to be performed in a fashion similar to subsets (because in both operations the purpose of the i argument is to obtain matching rows). For example, dt1[cntry == "FR"] would first compute the matching row indices, and then extract all columns for those rows (since no columns are provided in the j argument). This has several advantages. For example, if we would like to return only a subset of columns, we can do:
dt1[dt2, .(cntry, Y_less.15), on="cntry"]
This is efficient because data.table looks at the j expression, notices that only those two columns are required, and, for the computed row indices, extracts only the required columns, thereby avoiding unnecessary materialisation of all the other columns.
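Continuing with the hypothetical toy dt1 and dt2 defined in the previous sketch, that column selection would return:

dt1[dt2, .(cntry, Y_less.15), on = "cntry"]
#    cntry Y_less.15
# 1:    FR       0.2
# 2:    FR       0.2
# 3:    UK       0.2
# 4:    DE       0.3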
Also, just as we can select columns, we can also compute on columns. For example, what if you'd like to get sum(Y_less.15)?
dt1[dt2, sum(Y_less.15), on="cntry"]
# [1] 2.3
This is great, but it computes the sum on all the matching rows. What if you'd like to get the sum for each row in dt2$cntry? This is where by = .EACHI comes in.
dt1[dt2, sum(Y_less.15), on="cntry", by=.EACHI]
# cntry V1
# 1: FR 0.2
# 2: UK 0.2
# 3: DE 0.3
by=.EACHI ensures that the j expression is evaluated for each row in i = dt2.
Similarly, we can also add/update columns while joining using the := operator, and that is the answer shown above. The only tricky part is extracting the values for the matching rows from dt2, since they are stored in separate columns. Hence we use mget(). The expression unlist(mget(.)) gets evaluated for each row of dt2 while matching on the cntry column, and the corresponding values are assigned to ratio using the := operator.
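As a standalone illustration of the mget() part (plain base R, with hypothetical variables standing in for dt2's columns):

Y_less.15 <- 0.2   # hypothetical values, as if they were dt2's columns for one country
Y_15.25   <- 0.3
unlist(mget(c("Y_less.15", "Y_15.25")))   # look the variables up by name and flatten the list
# Y_less.15   Y_15.25 
#       0.2       0.3

Inside the join, the same lookup happens for each by=.EACHI group: the matching age.cat values supply the names, and dt2's columns supply the variables.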
For more details on the history of the := operator, see this, this and this post on SO.
For more on by=.EACHI, see this post.
For more on data.table syntax introduction and reference semantics, see the vignettes.
Hope this helps.