I'm wondering if there's a way to process knitr code chunks in an R Markdown document asynchronously.
Here's what I have in mind: imagine a document containing a complex data analysis broken into several code chunks, a number of which are slow to run. These chunks have no dependencies on one another, and each produces a plot, a table, or some other numerical result, but no data object that any other chunk consumes.
It would be great if I could parallelise the processing of these code chunks. knitr processes chunks sequentially, so several fast chunks can be held up in the queue behind a single slow one. R packages like future and promises enable asynchronous programming, and I was wondering whether this could be leveraged to process knitr chunks in parallel. I'm aware that I could put the slow chunks in separate Rmd files and then, in a code chunk, call `knitr::knit_child` inside a `furrr::future_map` call (`future_map` is in the furrr package rather than future itself), but it would be nicer to keep everything together in the same file. I'm also aware that it's possible to specify child documents using the `child` chunk option, and that code chunks can be reused by name via the `ref.label` chunk option. So what I'm wondering is whether there's any way to hijack any of this functionality (probably using future) so that rendering of a chunk's output is delayed while execution passes to subsequent chunks. Or something like that. I'm just exploring the space of possibilities for moving beyond sequential evaluation of code chunks in knitr and using multicore or multisession processing.