
Situation: I have a Shiny app running 24/7 on a server, connected to live data via a SQL Server that is updated every day.

Problem: I want to automate the table-writing process so that a table is created every day at 20:00. The write process should include an automated refresh of the Shiny app, since it needs to update and recalculate its numbers from the SQL source connection.

Hence, the ideal process would look like:

• Day 1, 20:00 → the Shiny app is re-run, updated data are fed into the analysis → a backup table of the result is saved as CSV in a directory under the name 'Back-up 1'
• Day 2, 20:00 → the Shiny app is re-run, updated data are fed into the analysis → a backup table of the result is saved as CSV in a directory under the name 'Back-up 2'
• and so on...

What I am looking for is R code showing: 1. how to write a table periodically, and 2. how to make a Shiny app refresh periodically.
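For the sequential 'Back-up N' naming described above, here is a minimal sketch. The helper name `next_backup_name` is my own, and it assumes each backup lands in one dedicated directory:

```r
# Hypothetical helper: derive the next "Back-up N.csv" filename by
# counting the backup files already present in the directory
next_backup_name <- function(dir) {
  existing <- list.files(dir, pattern = "^Back-up [0-9]+\\.csv$")
  file.path(dir, paste0("Back-up ", length(existing) + 1, ".csv"))
}
```

A scheduled script could then call `write.csv(result, next_backup_name("filepath"))` each evening.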

Question update: I tried creating a reactive database connection as advised below. The problem with the implementation is that my Shiny app creates three reactive datasets in order to visualise the data. The solution, I imagine, would be to build the reactiveness of the database connection into each of the three reactive datasets, but I wonder whether there is a smarter way to do this without repeating the same lines of code three times. Here is the server side of the Shiny app (not working):

##server.R##
server<-function(input,output, session){

MyData <- reactive({   
invalidateLater(86400000, session) 

#connect to the Server 
connection <- odbcConnect(dns, user, pass)
SituationToday<-{cat("test");sqlQuery(connection, "SELECT ALL * FROM   Table;")}
odbcClose(connection)

#data manipulation of SituationToday dataset including cleaning, filtering, joins, re-coding, labelling & as result I get 2 datasets
df1
df2

#backup
write.csv2(df1, paste0("filepath/Backup1", Sys.Date(), ".csv"))
write.csv2(df2, paste0("filepath/Backup2", Sys.Date(), ".csv"))

#reactive datasets that I need in order to visualise the data
data.df<-reactive({
VARIABLE<-input$variable
df1[df1$variable %in% VARIABLE,]})

data2.df<-reactive({
VARIABLE2<-input$variable2
df2[df2$variable2 %in% VARIABLE2,]})

data3.df<-reactive({
SELECT<-input$select
GROUP<-input$group
df1[df1$variable %in% SELECT & df1$GROUP %in% GROUP,]})
})

#different outputs follow
output$plot<-renderPlot({
  plot(data.df()) })
output$plot2<-renderPlot({
  plot(data2.df()) })
output$plot3<-renderPlot({
  plot(data3.df()) })
}

shinyApp(ui=ui,server=server)

How can I keep the connection reactive while creating three different reactive datasets within the server function, without having to repeat a third of the server code inside each of the reactive datasets?

Patrik_P
  • I think I made my point clear: I would like to know how to automate writing a CSV file tied to a refresh of the app. My R code is irrelevant here since it lacks any idea of how to do what I asked for; I did not give it a try because I did not find a reference online, which is why I ask here. If you necessarily need to start with something, here you go: write.table(x, file = "", append = FALSE, quote = TRUE, sep = " ", eol = "\n", na = "NA", dec = ".", row.names = TRUE, col.names = TRUE, qmethod = c("escape", "double"), fileEncoding = "") – Patrik_P Feb 18 '16 at 07:59
  • Why so mad? Have a look at this: http://stackoverflow.com/a/21548386/2923027 – CuriousBeing Feb 18 '16 at 08:03
  • hmm, now we're talking. Not mad, it's just a problem I really would like to solve; gonna take a look at this, thanks for now @MaxPD – Patrik_P Feb 18 '16 at 08:08
  • This is fundamentally unclear since you don't even give your operating system. Running regular tasks is an operating system thing. – Spacedman Feb 18 '16 at 08:45
  • Thanks for the comment @Spacedman, I did not know that before MaxPD's comment – Patrik_P Feb 18 '16 at 08:51
  • Why not update everything inside your Shiny app? Wrap your read-in and write statements in a reactive environment (observe) and autoinvalidate those. – K. Rohde Feb 19 '16 at 13:31
  • @K.Rohde, sounds like that might be something interesting, however, I do not see how to implement it, could you provide me with an example? Generally I find this solution useful http://stackoverflow.com/questions/2793389/scheduling-r-script/21548386#21548386 however the Rscript.exe doesn't recognise the file to execute...even though the R file is listed in the Optional Fields. – Patrik_P Feb 22 '16 at 06:53

1 Answer


As an answer to your comment @Patrik, here is an example of how I generally deal with these things:

(Only providing the server side of the shiny application. UI needs no adjustments.)

server = function(input, output, session){

  YourData <- reactive({                   # Responds to changes. But in here, only the invalidation triggers change.
    invalidateLater(86400000, session)     # Invalidates YourData() after 86400000 millisecs = 24 hours.

    #-------- Some statements that get data from the SQL database, e.g. with library RODBC
    conn <- odbcDriverConnect("DSN=DB;UID=Usrname;PWD=12345")
    rawData <- sqlQuery(conn, "SELECT * FROM Table1")
    odbcClose(conn)

    #-------- Some statements that process your Data.
    rawData$value <- rawData$value * 2
    rawData$time <- strptime(rawData$time, format = "%Y-%m-%d %H:%M")

    #-------- Backup creation after data processing.
    write.csv2(rawData, paste0("filepath/Backup", Sys.Date(), ".csv"))

    #-------- And finally call the Dataset you want to return to use in your Shiny-App
    rawData 
 })

  output$plot <- renderPlot({
    plot(YourData())
  })
}

With that, your Shiny app keeps running, but fetches new data and does all computations and backup creation once every 24 hours.
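One caveat: `invalidateLater(86400000)` fires 24 hours after the session starts, not at 20:00 sharp. Here is a sketch for computing the first delay so the refresh lands on a fixed hour; the helper `ms_until_hour` is my own naming, not part of Shiny:

```r
# Milliseconds from `now` until the next occurrence of `hour`:00:00,
# so the first invalidateLater() call can be aligned to the schedule
ms_until_hour <- function(hour, now = Sys.time()) {
  target <- as.POSIXct(format(now, paste0("%Y-%m-%d ", sprintf("%02d", hour), ":00:00")))
  if (target <= now) target <- target + 24 * 60 * 60  # already past today: take tomorrow
  as.numeric(difftime(target, now, units = "secs")) * 1000
}
```

Inside the reactive, one would then call `invalidateLater(ms_until_hour(20), session)` instead of the fixed 86400000.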

It might still be a bit abstract, but feel free to ask, if anything is unclear.

Reaction to Updated Question

Problem: MyData() is not just a chunk of code, but an object with a normal R class, stored in a normal R variable. In contrast to other variables, though, Shiny permanently watches its value to detect changes, and it tracks all dependencies on that variable.

Solution: My first approach would be to let the invalidation affect only your query, and then let that propagate to your other reactive environments in a sort of "cascade".

Here the Code:

server<-function(input,output, session){

  # First, return only your SQL results

  MyData <- reactive({   
    invalidateLater(86400000, session) 

    #connect to the Server 
    connection <- odbcConnect(dns, user, pass)
    SituationToday<-{cat("test");sqlQuery(connection, "SELECT ALL * FROM   Table;")}
    odbcClose(connection)

    SituationToday
  })

  # Second, manipulate your data frames 1 and 2

  df1 <- reactive({
    SituationToday <- MyData()        # Reacts whenever MyData() changes

    #data manipulation of SituationToday dataset including cleaning, filtering, joins, re-coding, labelling
    # Resulting in your Set df1

    #backup
    write.csv2(df1, paste0("filepath/Backup1", Sys.Date(), ".csv"))

    # Return the dataframe
    df1
  })

  # Same for df2

  df2 <- reactive({
    SituationToday <- MyData()        # Reacts whenever MyData() changes

    #data manipulation of SituationToday dataset including cleaning, filtering, joins, re-coding, labelling
    # Resulting in your Set df2

    #backup
    write.csv2(df2, paste0("filepath/Backup2", Sys.Date(), ".csv"))

    # Return the dataframe
    df2
  })

  #reactive datasets that I need in order to visualise the data

  data.df <- reactive({
    VARIABLE<-input$variable              # Reacts to input
    df1()[df1()$variable %in% VARIABLE,]  # As well as change in df1()
  })

  data2.df<-reactive({
    VARIABLE2<-input$variable2              # Reacts to input
    df2()[df2()$variable2 %in% VARIABLE2,]  # As well as change in df2()
  })

  data3.df<-reactive({
    SELECT<-input$select
    GROUP<-input$group
    df1()[df1()$variable %in% SELECT & df1()$GROUP %in% GROUP,] # Again df1() dependant
  })

  #different outputs follow

  output$plot<-renderPlot({
    plot(data.df()) })
  output$plot2<-renderPlot({
    plot(data2.df()) })
  output$plot3<-renderPlot({
    plot(data3.df()) })
}

If you are really annoyed by having to split the creation of df1 and df2, you could also consider returning a list of your two data frames.
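A plain-R sketch of that list-returning idea (the column name and the cleaning steps are placeholders): the shared query and cleaning happen once, and a single named list carries both frames.

```r
# Placeholder processing: one pass over the queried data produces both frames
prepare_data <- function(SituationToday) {
  df1 <- SituationToday[SituationToday$value > 0, , drop = FALSE]    # e.g. filtering
  df2 <- SituationToday[order(SituationToday$value), , drop = FALSE] # e.g. re-ordering
  list(df1 = df1, df2 = df2)
}
# In the app, this body would live inside one reactive(), and the
# downstream reactives would read MyData()$df1 / MyData()$df2
```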

K. Rohde
  • Hi @K.Rohde, not exactly the way I was thinking about it, but a great example that proves very useful to me, thanks a lot! Accepted as solution. – Patrik_P Feb 23 '16 at 14:10
  • I got an implementation problem, could you take a look into the update of my question. Help is much appreciated! – Patrik_P Feb 24 '16 at 09:07
  • thanks for the answer! I set it up properly as you pointed out. Now, the way it works, the data is written every time a session is opened. So I think I will keep this app private, just for the purpose of writing the data, instead of my initial plan of having everything in one app: backups, data manipulation, analysis and visualisation... – Patrik_P Feb 25 '16 at 14:40
  • @Patrik Damn, I didn't think of that. Invalidation and such is always client dependent. Then I'd recommend splitting up the task again. Let a script fetch data and store it somewhere (e.g. as "df1.csv") with the use of Windows Scheduler. And your Shiny script can auto-invalidate and reload more recent data from this csv. – K. Rohde Feb 25 '16 at 14:49
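The split described in the last comment (a scheduled script writes the CSV, and the app only reloads it) needs a way to find the most recent backup. A sketch, assuming date-stamped names as in the answer, which sort correctly as plain strings:

```r
# Pick the newest "Backup<date>.csv" in a directory; dated names like
# "Backup2016-02-25.csv" sort lexicographically in chronological order
latest_backup <- function(dir) {
  files <- sort(list.files(dir, pattern = "^Backup.*\\.csv$"))
  if (length(files) == 0) return(NA_character_)
  file.path(dir, files[length(files)])
}
```

The Shiny reactive would then `read.csv2(latest_backup("filepath"))` after each invalidation.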