189

Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this:

  1. Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district.

  2. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).

  3. The analyst analyzes the data created in (2), gets close to her goal, but sees that she needs more data, and so goes back to (2).

  4. Rinse repeat until the tables and graphics meet QA/QC and satisfy the client.

  5. Write report incorporating tables and graphics.

  6. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data by a new download (e.g. get the building permits from the last year), and pressing a "RECALCULATE" button, unless specifications change.

At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ArcGIS, R, and Unix tools.

Thanks!

PS:

Below is a basic Makefile that tracks the dependencies among various intermediate datasets (.RData suffix) and scripts (.R suffix). Make uses timestamps to check dependencies, so if you touch ss07por.csv, it will see that this file is newer than all the files/targets that depend on it, and will run the given scripts in order to update them accordingly. This is still a work in progress: I still need to add a step for loading into an SQL database, and a step for a templating language like Sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback!

http://www.gnu.org/software/make/manual/html_node/index.html#Top

R=/home/wsprague/R-2.9.2/bin/R

persondata.RData: ImportData.R ../../DATA/ss07por.csv Functions.R
	$(R) --slave -f ImportData.R

persondata.Munged.RData: MungeData.R persondata.RData Functions.R
	$(R) --slave -f MungeData.R

report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R
	$(R) --slave -f TabulateAndGraph.R > report.txt

forkandwait
  • 5,041
  • 7
  • 23
  • 22
  • 14
    Oh my. **those who enter here, beware**: the answers on this question were excellent five years ago. They are now *all* completely outdated. Nowadays, I would advise strongly against following any of the answers here. There are now much better tools available. As a start, I’ll refer to [an example project using Makefiles and Knitr](https://github.com/klmr/example-r-analysis). – Konrad Rudolph Sep 11 '16 at 18:00
  • [R Notebooks](https://rmarkdown.rstudio.com/r_notebooks.html), [odbc drivers](https://db.rstudio.com/odbc/), [git](https://git-scm.com/) and [git lfs](https://git-lfs.github.com/) are all heaven sent for this problem. – DaveRGP May 21 '18 at 09:15
  • 2
    I would strongly recommend setting up the project according to the principles outlined e.g. here: https://github.com/ropensci/rrrpkg. The so-called "research compendium" is a godsend when doing reproducible data science – Kresten Apr 12 '19 at 11:51

14 Answers

200

I generally break my projects into 4 pieces:

  1. load.R
  2. clean.R
  3. func.R
  4. do.R

load.R: Takes care of loading in all the data required. Typically this is a short file, reading in data from files, URLs and/or ODBC. Depending on the project at this point I'll either write out the workspace using save() or just keep things in memory for the next step.

clean.R: This is where all the ugly stuff lives - taking care of missing values, merging data frames, handling outliers.

func.R: Contains all of the functions needed to perform the actual analysis. source()'ing this file should have no side effects other than loading up the function definitions. This means that you can modify this file and reload it without having to go back and repeat steps 1 & 2, which can take a long time to run for large data sets.

do.R: Calls the functions defined in func.R to perform the analysis and produce charts and tables.

The main motivation for this setup is working with large data, where you don't want to reload the data each time you make a change to a subsequent step. Also, keeping my code compartmentalized like this means I can come back to a long-forgotten project, quickly read load.R to work out what data I need to update, and then look at do.R to work out what analysis was performed.
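The four scripts above can be sketched end-to-end. This is a toy, self-contained version in a single session: the data are synthetic, tempdir() stands in for a project folder, and all object names are invented for illustration.

```r
# Toy run of the load/clean/func/do layout.
# Real projects keep each section in its own file and source() them.
cache <- tempdir()

## load.R: read in the raw data (here a synthetic data frame)
raw <- data.frame(group = c("a", "a", "b"), value = c(1, NA, 3))
save(raw, file = file.path(cache, "raw.RData"))

## clean.R: the ugly stuff -- here, just dropping missing values
load(file.path(cache, "raw.RData"))
clean <- raw[!is.na(raw$value), ]
save(clean, file = file.path(cache, "clean.RData"))

## func.R: function definitions only, no side effects
summarise_by_group <- function(d) aggregate(value ~ group, data = d, FUN = mean)

## do.R: load the cleaned data and run the analysis
load(file.path(cache, "clean.RData"))
result <- summarise_by_group(clean)
print(result)
```

Because func.R holds only definitions, it can be re-sourced after an edit without re-running the load and clean steps.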

Josh Reich
  • 6,477
  • 5
  • 28
  • 26
  • 12
    That's a really good workflow. I have struggled with designing a workflow and when I ask those around me they generally respond, "what? workflow? huh?" So I take it they don't think about this much. I'm going to adopt the Reichian LCFD model. – JD Long Sep 16 '09 at 19:26
  • 1
    this is pretty close to my workflow, I have often an import script, analysis script and reporting script – kpierce8 Oct 10 '09 at 03:50
  • 4
    LCFD: Least Commonly Fouled-up Data – William Doane Oct 13 '09 at 16:28
  • 2
    There is a nice presentation video + slides by Jeromy Anglim that incorporates this workflow here http://www.vcasmo.com/video/drewconway/10362 – David LeBauer Dec 07 '10 at 17:07
  • If the successive tasks are separate processes, don't you *have* to use save() to persist the data between (for instance) load.R and clean.R? Or is there a faster way to do this? The best I can think of is writing it to "/dev/shm/" – pufferfish Jul 04 '11 at 16:37
  • That's nice, I'll experiment with this workflow next time I work on a project. I used to put everything on one script and separate those phases with those comment boxes, but navigating through the phases was still quite annoying when the script reached some respectable size. – Waldir Leoncio Mar 04 '12 at 12:48
  • `do.R` can get pretty nasty as well. Definitely good to look for places where things can be split out, so that you don't need to do your whole analysis just to re-calculate a single graph. I also find it useful to have a `plot_functions.R`, which includes all the plot code, inside a function for each plot. Then you can just open a graphics device, run the plot, and don't have to worry about anything else. @pufferfish: You can have save the output of `clean.R` at the end, and then check that the saved file exists and load it, instead of running `clean.R` again. – naught101 Jul 20 '12 at 06:57
  • @David the link is broken. Could you provide a working one? Thanks – Simone May 28 '13 at 01:09
  • 2
    @Simone here it is: http://files.meetup.com/1685538/Rmeetup_Workflow_fullscreen.pdf – David LeBauer May 28 '13 at 01:16
  • @pufferfish not if you use `source`, e.g. `source("load.R"); source("clean.R")` – David LeBauer May 28 '13 at 01:19
  • I found that Rob Hyndman's workflow of "main.R", "functions.R", etc. to be interesting. It points to this answer as a starting point: http://robjhyndman.com/hyndsight/workflow-in-r/ – Jeff Moser Dec 18 '13 at 14:48
96

If you'd like to see some examples, I have a few small (and not so small) data cleaning and analysis projects available online. In most, you'll find a script to download the data, one to clean it up, and a few to do exploration and analysis.

Recently I have started numbering the scripts, so it's completely obvious in which order they should be run. (If I'm feeling really fancy I'll sometimes make it so that the exploration script will call the cleaning script which in turn calls the download script, each doing the minimal work necessary - usually by checking for the presence of output files with file.exists. However, most times this seems like overkill).
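The file.exists guard described above might look like this. A minimal sketch: the step functions and file names are invented, and real scripts would live in separate numbered files.

```r
# Each step checks for its prerequisite's output and triggers the
# earlier step only when that output is missing.
raw_file <- file.path(tempdir(), "raw.csv")

download_step <- function() {
  write.csv(data.frame(x = 1:3), raw_file, row.names = FALSE)
}

clean_step <- function() {
  if (!file.exists(raw_file)) download_step()  # do the minimal work necessary
  read.csv(raw_file)
}

raw  <- clean_step()  # first call runs the download step
raw2 <- clean_step()  # later calls reuse the existing file
```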

I use git for all my projects (a source code management system), so it's easy to collaborate with others, see what is changing, and easily roll back to previous versions.

If I do a formal report, I usually keep R and latex separate, but I always make sure that I can source my R code to produce all the code and output that I need for the report. For the sorts of reports that I do, I find this easier and cleaner than working with latex.

hadley
  • 102,019
  • 32
  • 183
  • 245
  • I commented about Makefiles above, but you might want to look into them -- it is the traditional dependency checking language. Also -- I am going to try to learn ggplot2 -- looks great! – forkandwait Sep 18 '09 at 05:18
  • I like the idea of having a way to specify dependencies between files, but having to learn m4 is a big turn off. I wish there was something like rake written in R. – hadley Sep 18 '09 at 14:35
  • 2
    For dependencies, you can also do it within the R files. Instead of doing `source("blah.R")`, check if the required variable(s) exist first: `if (!exists("foo")) { source("blah.R") }`. That avoids re-running dependencies if they've already run. – naught101 Apr 03 '13 at 02:16
17

I agree with the other responders: Sweave is excellent for report writing with R. And rebuilding the report with updated results is as simple as re-calling the Sweave function. It's completely self-contained, including all the analysis, data, etc. And you can version control the whole file.

I use the StatET plugin for Eclipse for developing the reports, and Sweave is integrated (Eclipse recognizes LaTeX formatting, etc.). On Windows, it's easy to use MiKTeX.

I would also add that you can create beautiful reports with Beamer. Creating a normal report is just as simple. I included an example below that pulls data from Yahoo! and creates a chart and a table (using quantmod). You can build this report like so:

Sweave(file = "test.Rnw")

Here's the Beamer document itself:

% 
\documentclass[compress]{beamer}
\usepackage{Sweave}
\usetheme{PaloAlto} 
\begin{document}

\title{test report}
\author{john doe}
\date{September 3, 2009} 

\maketitle

\begin{frame}[fragile]\frametitle{Page 1: chart}

<<echo=FALSE,fig=TRUE,height=4, width=7>>=
library(quantmod)
getSymbols("PFE", from="2009-06-01")
chartSeries(PFE)
@

\end{frame}


\begin{frame}[fragile]\frametitle{Page 2: table}

<<echo=FALSE,results=tex>>=
library(xtable)
xtable(PFE[1:10,1:4], caption = "PFE")
@

\end{frame}

\end{document}
Shane
  • 98,550
  • 35
  • 224
  • 217
  • 6
    Don't believe that an Sweave report is reproducible until you test it on a clean machine. It's easy to have implicit external dependencies. – John D. Cook Sep 16 '09 at 13:54
17

I just wanted to add, in case anyone missed it, that there's a great post on the learnr blog about creating repetitive reports with Jeffrey Horner's brew package. Matt and Kevin both mentioned brew above. I haven't actually used it much myself.

The entry follows a nice workflow, so it's well worth a read:

  1. Prepare the data.
  2. Prepare the report template.
  3. Produce the report.

Actually producing the report once the first two steps are complete is very simple:

library(tools)
library(brew)
brew("population.brew", "population.tex")
texi2dvi("population.tex", pdf = TRUE)
John
  • 41,131
  • 31
  • 82
  • 106
Shane
  • 98,550
  • 35
  • 224
  • 217
  • In fixing a small grammatical error I messed up the wordpress.com addressing. So the correct link is http://learnr.wordpress.com/2009/09/09/brew-creating-repetitive-reports/ – learnr Sep 24 '09 at 19:51
15

For creating custom reports, I've found it useful to incorporate many of the existing tips suggested here.

Generating reports: A good strategy for generating reports involves the combination of Sweave, make, and R.

Editor: Good editors for preparing Sweave documents include:

  • StatET and Eclipse
  • Emacs and ESS
  • Vim and Vim-R
  • R Studio

Code organisation: In terms of code organisation, I find two strategies useful:

Jeromy Anglim
  • 33,939
  • 30
  • 115
  • 173
7

I use Sweave for the report-producing side of this, but I've also been hearing about the brew package - though I haven't yet looked into it.

Essentially, I have a number of surveys for which I produce summary statistics. Same surveys, same reports every time. I built a Sweave template for the reports (which takes a bit of work). But once the work is done, I have a separate R script that lets me point out the new data. I press "Go", Sweave dumps out a few score .tex files, and I run a little Python script to pdflatex them all. My predecessor spent ~6 weeks each year on these reports; I spend about 3 days (mostly on cleaning data; escape characters are hazardous).

It's very possible that there are better approaches now, but if you do decide to go this route, let me know - I've been meaning to put up some of my Sweave hacks, and that would be a good kick in the pants to do so.

Matt Parker
  • 26,709
  • 7
  • 54
  • 72
7

I'm going to suggest something in a different sort of direction from the other submitters, based on the fact that you asked specifically about project workflow, rather than tools. Assuming you're relatively happy with your document-production model, it sounds like your challenges really may be centered more around issues of version tracking, asset management, and review/publishing process.

If that sounds correct, I would suggest looking into an integrated ticketing/source management/documentation tool like Redmine. Keeping related project artifacts such as pending tasks, discussion threads, and versioned data/code files together can be a great help even for projects well outside the traditional "programming" bailiwick.

rcoder
  • 12,229
  • 2
  • 23
  • 19
5

Agreed that Sweave is the way to go, with xtable for generating LaTeX tables. Although I haven't spent too much time working with them, the recently released tikzDevice package looks really promising, particularly when coupled with pgfSweave (which, as far as I know, is only available on rforge.net at this time -- there is a link to r-forge from there, but it's not responding for me at the moment).

Between the two, you'll get consistent formatting between text and figures (fonts, etc.). With brew, these might constitute the holy grail of report generation.

kmm
  • 6,045
  • 7
  • 43
  • 53
  • pgfSweave is currently in "development limbo" as the developers haven't had time to incorporate the new tikzDevice. For now we suggest using tikzDevice from within normal Sweave documents-- the user just has to take responsibility for opening/closing the device and \including{} the resulting output. – Sharpie Sep 16 '09 at 18:58
  • @Sharpie: Any updates on the development status of pgfSweave? It looks great, but doesn't seem to work on any system I've tried. – Ari B. Friedman Mar 30 '11 at 03:20
  • @gsk3 The other developer has been very active in keeping pgfSweave updated and has done a lot of work since I posted that comment. Head over to http://github.com/cameronbracken/pgfSweave to track development. If the package is not working for you, we would love to get a bug report so we can get it fixed. – Sharpie Mar 31 '11 at 17:54
  • @Sharpie: Great, thanks. I forwarded your message to my friend who's put more work in on it than I have. If he doesn't file a bug report soon then I'll get one together. It looks like a great package; thanks for all the hard work. – Ari B. Friedman Apr 01 '11 at 02:47
4

At a more "meta" level, you might be interested in the CRISP-DM process model.

Jouni K. Seppänen
  • 43,139
  • 5
  • 71
  • 100
4

"make" is great because (1) you can use it for all your work in any language (unlike, say, Sweave and Brew), (2) it is very powerful (enough to build all the software on your machine), and (3) it avoids repeating work. This last point is important to me because a lot of the work is slow; when I latex a file, I like to see the result in a few seconds, not the hour it would take to recreate the figures.

dank
  • 841
  • 1
  • 8
  • 15
  • +1 for make; However, I don't see make as incompatible with Sweave. Rather when I produce reports, make calls Sweave (and other things). – Jeromy Anglim Nov 26 '10 at 06:44
4

I use project templates along with R studio, currently mine contains the following folders:

  • info : pdfs, powerpoints, docs... which won't be used by any script
  • data input : data that will be used by my scripts but not generated by them
  • data output : data generated by my scripts for further use but not as a proper report.
  • reports : Only files that will actually be shown to someone else
  • R : All R scripts
  • SAS : Because I sometimes have to :'(

I wrote custom functions so I can call smart_save(x,y) or smart_load(x) to save or load RDS files to and from the data output folder (files named with variable names) so I'm not bothered by paths during my analysis.
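The originals aren't shown, so this is only a guess at how smart_save()/smart_load() might be written (single-argument variants; the folder path and helper internals are assumptions):

```r
# Hypothetical helpers: save/load an object as an RDS file named after
# the variable itself, so analysis scripts never deal with paths directly.
output_dir <- file.path(tempdir(), "data output")  # project subfolder in practice
dir.create(output_dir, showWarnings = FALSE)

smart_save <- function(x) {
  name <- deparse(substitute(x))  # file takes the variable's own name
  saveRDS(x, file.path(output_dir, paste0(name, ".RDS")))
}

smart_load <- function(x) {
  name <- deparse(substitute(x))
  assign(name, readRDS(file.path(output_dir, paste0(name, ".RDS"))),
         envir = parent.frame())  # restore into the caller's environment
}

my_table <- data.frame(a = 1:2)
smart_save(my_table)  # writes "data output/my_table.RDS"
rm(my_table)
smart_load(my_table)  # my_table is back in the workspace
```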

A custom function new_project creates a numbered project folder, copies all the files from the template, renames the RProj file, edits the setwd calls, and sets the working directory to the new project.

All R scripts are in the R folder, structured as follows:


00_main.R
  • setwd
  • calls scripts 1 to 5

00_functions.R
  • All functions, and only functions, go there. If there are too many I'll split them into several files, all named like 00_functions_something.R; in particular, if I plan to make a package out of some of them, I'll put those apart.

00_explore.R
  • a bunch of script chunks where I'm testing things or exploring my data
  • It's the only file where I'm allowed to be messy.

01_initialize.R
  • Prefilled with a call to a more general initialize_general.R script from my template folder which loads the packages and data I always use and don't mind having in my workspace
  • loads 00_functions.R (prefilled)
  • loads additional libraries
  • set global variables

02_load data.R
  • loads csv/txt/xlsx/RDS files; there's a prefilled commented line for every type of file
  • displays which files have been created in the workspace

03_pull data from DB.R
  • Uses dbplyr to fetch filtered and grouped tables from the DB
  • some prefilled commented lines to set up connections and fetch.
  • Keep client side operations to bare minimum
  • No server side operations outside of this script
  • Displays which files have been created in the workspace
  • Saves these variables so they can be reloaded faster

Once it's been done once, I switch off a query_db boolean and the data will be reloaded from RDS next time.

It can happen that I have to refeed data to DBs; if so, I'll create additional steps.
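The query_db switch amounts to a small caching pattern. A sketch, with a plain data frame standing in for the real dbplyr fetch (the flag handling and table contents are illustrative):

```r
# Fetch once, cache as RDS, and reload the cache on subsequent runs.
cache_file <- file.path(tempdir(), "db_tables.RDS")
query_db <- !file.exists(cache_file)  # or flipped by hand at the top of the script

if (query_db) {
  db_tables <- data.frame(id = 1:3, n = c(10, 20, 30))  # stand-in for the DB query
  saveRDS(db_tables, cache_file)                        # save for faster reloads
} else {
  db_tables <- readRDS(cache_file)
}
```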


04_Build.R
  • Data wrangling, all the fun dplyr / tidyr stuff goes there
  • displays which files have been created in the workspace
  • save these variables

Once it's been done once, I switch off a build boolean and the data will be reloaded from RDS next time.


05_Analyse.R
  • Summarize, model...
  • writes out Excel and csv report files

95_build ppt.R
  • template for powerpoint report using officer

96_prepare markdown.R
  • setwd
  • load data
  • set markdown parameters if needed
  • render

97_prepare shiny.R
  • setwd
  • load data
  • set shiny parameters if needed
  • runApp

98_Markdown report.Rmd
  • A report template

99_Shiny report.Rmd
  • An app template
moodymudskipper
  • 46,417
  • 11
  • 121
  • 167
2

For writing a quick preliminary report or email to a colleague, I find that it can be very efficient to copy-and-paste plots into MS Word or an email or wiki page -- often best is a bitmapped screenshot (e.g. on a Mac, Apple-Shift-(Ctrl)-4). I think this is an underrated technique.

For a more final report, writing R functions to easily regenerate all the plots (as files) is very important. It does take more time to code this up.

On the larger workflow issues, I like Hadley's answer on enumerating the code/data files for the cleaning and analysis flow. All of my data analysis projects have a similar structure.

Brendan OConnor
  • 9,624
  • 3
  • 27
  • 25
2

I'll add my voice to Sweave. For complicated, multi-step analyses you can use a makefile to specify the different parts. This can prevent having to repeat the whole analysis if just one part has changed.

PaulHurleyuk
  • 8,009
  • 15
  • 54
  • 78
0

I also do what Josh Reich does, only I do it by creating personal R packages, as they help me structure my code and data and make it easy to share with others.

  1. create my package
  2. load
  3. clean
  4. functions
  5. do

creating my package: devtools::create('package_name')

load and clean: I create scripts in the data-raw/ subfolder of my package for loading, cleaning, and storing the resulting data objects in the package using devtools::use_data(object_name). Then I build the package. From now on, calling library(package_name) makes these data available (they are not loaded until necessary).

functions: I put the functions for my analyses into the R/ subfolder of my package, and export only those that need to be called from outside (and not the helper functions, which can remain invisible).

do: I create a script that uses the data and functions stored in my package. (If the analyses only need to be done once, I can put this script as well into the data-raw/ subfolder, run it, and store the results in the package to make it easily accessible.)

jciloa
  • 1,039
  • 1
  • 11
  • 22