I have the following data frame, and my intention is to find all IDs that have different USAGE but the same TYPE.
ID <- rep(1:4, each=3)
USAGE <- c("private", "private", "private", "private", "taxi", "private",
           "taxi", "taxi", "taxi", "taxi", "private", "taxi")
TYPE <- c("VW", "VW", "VW", "VW", "MER", "VW",
          "VW", "VW", "VW", "VW", "VW", "VW")
df <- data.frame(ID,USAGE,TYPE)
If I run
df %>% group_by(ID, TYPE) %>% filter(n_distinct(USAGE)>1)
I get the intended result. But my original data frame has more than 2 million rows, so I would like to use all my cores for this operation.
I tried this code with multidplyr:
f1 <- partition(df, ID)
f2 <- f1 %>% group_by(ID, TYPE) %>% filter(n_distinct(USAGE)>1)
f3 <- collect(f2)
But then the following messages appear:
Warning message: group_indices_.grouped_df ignores extra arguments
after
f1 <- partition(df, ID)
and
Error in checkForRemoteErrors(lapply(cl, recvResult)) :
4 nodes produced errors; first error: Evaluation error: object 'f1' not found.
after
f2 <- f1 %>% group_by(ID, TYPE) %>% filter(f1, n_distinct(USAGE)>1)
What would be the correct way to implement the whole operation in multidplyr? Thanks a lot.
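For what it's worth, here is a sketch of how I imagine it might look with the newer multidplyr API (where partition() takes an explicit cluster object rather than grouping variables). The worker count of 4 and the cluster_library() call are assumptions on my part; the key idea is to group by ID before partitioning, so that all rows of an ID land on the same worker and the per-(ID, TYPE) count of distinct USAGE values stays correct:

```r
library(dplyr)
library(multidplyr)

# Assumption: multidplyr with the new_cluster()/partition(cluster) API.
cluster <- new_cluster(4)            # 4 workers; adjust to your core count
cluster_library(cluster, "dplyr")    # make dplyr available on the workers

result <- df %>%
  group_by(ID) %>%        # partition boundary: keep each ID on one worker
  partition(cluster) %>%  # split the data across the workers
  group_by(ID, TYPE) %>%  # the grouping the filter actually needs
  filter(n_distinct(USAGE) > 1) %>%
  collect()               # gather the filtered rows back locally
```

Note that f2 should not receive f1 again inside filter(); the pipe already supplies the data, which is presumably why the workers complain that f1 cannot be found.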