I would like to perform some computation on a large dataframe. To do so I need to:
- split my dataframe into n equal parts (the last chunk may be smaller)
- do my computation (adding the result in a new column)
- recombine the dataframe
How can I do that?
Many thanks in advance for your help!
dataframe <- MyDataFrame
nb_obs <- nrow(dataframe) # in my dataframe I have 153,036 rows
nb_chunk <- ceiling(nb_obs / 250) # I thus need 613 chunks if I want 250 obs per sub-dataframe
for(i in 1:nb_chunk) {
  # my computation here; I want to add a new column to the chunk to store my results
}
# then I want to recombine the final dataset (equal to the original dataset with a new column added)
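For context, the three steps above can be sketched with base R's `split()` / `lapply()` / `do.call(rbind, ...)`. This is only a sketch: the dataframe and the per-chunk computation are placeholders (my real computation is more involved), and the chunk size of 250 is the one from the comment above.

```r
# stand-in for MyDataFrame: same number of rows, dummy columns
df <- data.frame(x = runif(153036), y = runif(153036))
chunk_size <- 250

# 1) split: assign each row a chunk id; the last chunk gets the leftover rows
chunk_id <- ceiling(seq_len(nrow(df)) / chunk_size)
chunks <- split(df, chunk_id)

# 2) compute: add a result column to every chunk (placeholder computation)
chunks <- lapply(chunks, function(ch) { ch$result <- ch$x + ch$y; ch })

# 3) recombine: bind the chunks back into one dataframe
out <- do.call(rbind, chunks)
```

Note that `do.call(rbind, ...)` mangles the row names (they become `"chunk.row"`); row order is preserved, though, so the result lines up with the original dataframe.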
EDIT
Thank you for your comments; please find below my proposal as a reproducible example using the iris dataset.
I have two additional questions at this stage:
- Can I initialize final.df without hard-coding the column names?
- Is there a better way to proceed with dplyr?
df <- iris # using iris as an example (my real dataframe is 153036 rows and 17 columns)
nb_obs <- nrow(df) # nb of observations in the dataframe (thus nb of operations to be performed)
nb_obs_in_chunk <- 13 # nb of rows per chunk
nb_chunk <- ceiling(nb_obs / nb_obs_in_chunk) # total nb of chunks to be created
nb_chunk_full <- floor(nb_obs / nb_obs_in_chunk) # nb of chunks with exactly nb_obs_in_chunk rows
nb_obs_last_chunk <- nb_obs - nb_obs_in_chunk * nb_chunk_full # nb of rows in the final chunk

# create a factor to split the dataframe into equal parts
df$split.factor <- as.factor(c(rep(1:nb_chunk_full, each = nb_obs_in_chunk),
                               rep(nb_chunk_full + 1, nb_obs_last_chunk)))

# initiate the final dataframe (desired output)
final.df <- data.frame(Sepal.Length = numeric(), Sepal.Width = numeric(),
                       Petal.Length = numeric(), Petal.Width = numeric(),
                       Species = factor(), split.factor = factor())

for(i in 1:nb_chunk) {
  temp_i <- df[df$split.factor == i, ]
  temp_i$NEW <- temp_i$Sepal.Length + temp_i$Sepal.Width
  final.df <- rbind(final.df, temp_i)
}
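For what it's worth, here is what I have found so far on my two questions, sketched on the same iris example (not necessarily the best approach). For the first, `df[0, ]` gives an empty copy of the dataframe that keeps all column names and types, with nothing hard-coded. For the second, with dplyr the whole loop seems to collapse into a grouped `mutate` (and for a purely row-wise computation like this one, the `group_by` may not even be needed):

```r
library(dplyr)

df <- iris
df$split.factor <- as.factor(ceiling(seq_len(nrow(df)) / 13))

# Q1: initialize an empty dataframe with the same columns/types, no hard-coded names
final.df <- df[0, ]

# Q2: dplyr version — the per-chunk computation becomes a grouped mutate
final.dplyr <- df %>%
  group_by(split.factor) %>%
  mutate(NEW = Sepal.Length + Sepal.Width) %>%
  ungroup()
```

Is that the idiomatic way, or am I missing something?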