
I have followed the instructions in this StMoMo package document to fit the Lee Carter model to mortality data for Canada.

The next step in my project is to measure the forecast accuracy of the Lee Carter model when fit to this Canadian data.

To do this, I tried to use accuracy() but ran into an error since my Lee Carter fit is of class "fitStMoMo" and not of class "forecast" or a time series.

Is there an alternative forecast accuracy function that I can use on "fitStMoMo" objects that will calculate Mean Error, Root Mean Squared Error, Mean Absolute Error, Mean Percentage Error, Mean Absolute Percentage Error and Mean Absolute Scaled Error for me?

Reprex

Reprex created using EWMaleData, as used in the StMoMo document, to demonstrate the error:

library("StMoMo")
library("demography")
library("forecast")

constLC <- function(ax, bx, kt, b0x, gc, wxt, ages){
    c1 <- mean(kt[1, ], na.rm = TRUE)
    c2 <- sum(bx[, 1], na.rm = TRUE)
    list(ax = ax + c1 * bx, bx = bx / c2, kt = c2 * (kt - c1))
}
LC <- StMoMo(link = "logit", staticAgeFun = TRUE, periodAgeFun = "NP",
             constFun = constLC)
LC <- lc(link = "logit")
LC$gnmFormula
#> [1] "D/E ~ -1 + offset(o) + factor(x) + Mult(factor(x), factor(t), inst = 1)"

EWMaleData
#> Mortality data for England and Wales
#>     Series:  male
#>     Years: 1961 - 2011
#>     Ages:  0 - 100
#>     Exposure:  central

EWMaleIniData <- central2initial(EWMaleData)
ages.fit <- 55:89
wxt <- genWeightMat(ages = ages.fit, years = EWMaleIniData$years,
                    clip = 3)
LCfit <- fit(LC, data = EWMaleIniData, ages.fit = ages.fit, wxt = wxt)
#> StMoMo: The following cohorts have been zero weigthed: 
#>   1872 1873 1874 1954 1955 1956 
#> StMoMo: Start fitting with gnm
#> Initialising
#> Running start-up iterations..
#> Running main iterations.....
#> Done
#> StMoMo: Finish fitting with gnm

LCfor <- forecast(LCfit, h = 50)
class(LCfit)
#> [1] "fitStMoMo"
class(LCfor)
#> [1] "forStMoMo"
accuracy(LCfit)
#> Error in accuracy.default(LCfit): First argument should be a forecast object 
#>   or a time series.
accuracy(LCfor)
#> Error in accuracy.default(LCfor): First argument should be a forecast object
#>   or a time series.
Vicky
  • The question is too broad and is likely to be closed. Don't ask so many questions in one question. In order to ask a better question please read [How to ask a good question](https://stackoverflow.com/help/how-to-ask), [Minimal, Complete, and Verifiable Example](https://stackoverflow.com/help/mcve) and [How to make a great R reproducible example](https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example). – Rui Barradas Nov 21 '18 at 11:35
  • @RuiBarradas Okay I will edit it to just ask one question. – Vicky Nov 21 '18 at 11:40
  • This answer has a very low chance of being answered unless you supply some reproducible code. Assuming the Canadian data isn't critical for your question, I suggest basing your example code instead on the data set supplied with the `StMoMo` package. – AkselA Nov 21 '18 at 11:58
  • @AkselA I'm working on my reproducible example now! – Vicky Nov 21 '18 at 12:06
  • @RuiBarradas I have added a reprex now – Vicky Nov 21 '18 at 12:25
  • @AkselA I have added a reprex now – Vicky Nov 21 '18 at 12:25
  • Nice. Normally we don't include the `install` calls, trusting people will do that in their preferred way. If the packages aren't on CRAN a pointer would be helpful, but in this case it's all very straight forward. As to your question I suspect we'll have to do the forecast verification manually (ie. write our own function). – AkselA Nov 21 '18 at 13:11
  • @AkselA ah okay! I've never written my own function before. Any pointers as to how I'd go about this? – Vicky Nov 21 '18 at 13:14
  • I've nominated the question for reopening. I believe what Vicky is asking for is a way to cross-validate the Lee Carter mortality model, and I believe I have a (naïve) answer for that. – AkselA Nov 21 '18 at 14:18
  • @AkselA thanks a lot! Hopefully it gets reopened. – Vicky Nov 21 '18 at 14:29

1 Answer


I'm not entirely sure how accuracy() from forecast works, but in some way it must be comparing real and predicted values and returning metrics on how much they differ. This can, in a broad sense, be regarded as a form of cross-validation. As accuracy() doesn't work for StMoMo objects, we might as well develop a cross-validation routine ourselves.
For a short primer on this form of cross-validation, I'd recommend Rob Hyndman's notes on tsCV() from forecast. It would have been nice if we could use tsCV() here, but it only works for univariate time series, and mortality data are essentially multivariate time series.
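
Just to illustrate the rolling-origin idea on an ordinary univariate series, tsCV() is used roughly like this (a minimal sketch on a built-in dataset that has nothing to do with mortality; rwf() is the random walk forecast from the forecast package):

e <- tsCV(AirPassengers, rwf, drift = TRUE, h = 1)  # one-step cross-validated errors
sqrt(mean(e^2, na.rm = TRUE))                       # overall rolling-origin RMSE
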
I should also mention that before today I had never heard of mortality modeling, so I'm very fuzzy on the model theory part of this.

This first bit is identical to what was already posted

library(StMoMo)
library(demography)
library(forecast)

data(EWMaleData)

constLC <- function(ax, bx, kt, b0x, gc, wxt, ages){
    c1 <- mean(kt[1, ], na.rm = TRUE)
    c2 <- sum(bx[, 1], na.rm = TRUE)
    list(ax = ax + c1 * bx, bx = bx / c2, kt = c2 * (kt - c1))
}

LC <- StMoMo(link="logit", staticAgeFun=TRUE, periodAgeFun="NP", constFun=constLC)
LC <- lc(link="logit")

EWMaleIniData <- central2initial(EWMaleData)

Then things get a little different. The central point of performing CV on a time series is to make a prediction for data we actually have, while pretending we don't. Therefore we'll have to subset our data so that the chunk we want to predict isn't part of the model fit. In this concrete example we'll fit on roughly the first 30 years (1961 to 1991) and then predict the next 10.

ages.fit <- 55:89
years.fit <- EWMaleIniData$years[1]:(EWMaleIniData$years[1] + 30)
years.for <- 10

wxt <- genWeightMat(ages=ages.fit, years=years.fit, clip=3)
LCfit <- fit(LC, data=EWMaleIniData, ages.fit=ages.fit,
  years.fit=years.fit, wxt=wxt)
LCfor <- forecast(LCfit, h=years.for)

Now that we have a ten-year forecast, we can compare those years to our actual data and use whatever error measure we want to see how accurate the prediction is.

cvy <- LCfor$years  # years used in forecast
cva <- LCfor$ages   # ages used in forecast

pred <- LCfor$rates # predicted mortality rates

# actual mortality rates subset to the same ages and years as forecast
actual <- EWMaleIniData$Dxt/EWMaleIniData$Ext
actual <- actual[rownames(actual) %in% cva,
                 colnames(actual) %in% cvy]

# A collection of error measures; plenty of others can be devised
err <- pred - actual
Q <- pred/actual
rmse <- sqrt(rowMeans(err^2))
mae <- rowMeans(abs(err))
smape <- 100 * (rowMeans(exp(abs(log(Q)))) - 1)
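
The remaining measures the question asks for can be built the same way. This is only a sketch using the usual textbook definitions (for MASE I'm scaling by the mean absolute one-step naive error over the training years), so check them against whatever convention you need:

me   <- rowMeans(err)                      # Mean Error
mpe  <- 100 * rowMeans(err / actual)       # Mean Percentage Error
mape <- 100 * rowMeans(abs(err) / actual)  # Mean Absolute Percentage Error

# MASE: scale by the mean absolute one-step naive error in the training period
train <- EWMaleIniData$Dxt / EWMaleIniData$Ext
train <- train[rownames(train) %in% cva, colnames(train) %in% years.fit]
naive.mae <- rowMeans(abs(t(apply(train, 1, diff))))
mase <- mae / naive.mae                    # Mean Absolute Scaled Error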

This bit is purely for displaying the results

par(mfrow=c(3, 1), mar=c(3, 3, 1, 1), mgp=c(2, 0.8, 0), oma=c(0, 0, 2, 0))
plot(as.numeric(names(rmse)), rmse, type="h", xlab="")
plot(as.numeric(names(mae)), mae, type="h", xlab="")
plot(as.numeric(names(smape)), smape, type="h", xlab="Ages")
mtext(paste("Forecast accuracy for the years", 
  paste(cvy[c(1, years.for)], collapse=" - ")), 
  3, outer=TRUE)

[Plot: per-age RMSE, MAE and sMAPE for the forecast years]

As seen in Hyndman's notes, to do this properly we'd have to do this comparison using forecasts at several points in our time series, and then average the scores.
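
A rough sketch of what that could look like, reusing the objects from above (the choice of cut-off years and the 10-year horizon are arbitrary on my part):

h <- 10
origins <- seq(1975, 1995, by = 5)  # last training year of each window

rmse.all <- sapply(origins, function(endyr) {
    yrs <- EWMaleIniData$years[1]:endyr
    w <- genWeightMat(ages = ages.fit, years = yrs, clip = 3)
    f <- fit(LC, data = EWMaleIniData, ages.fit = ages.fit, years.fit = yrs, wxt = w)
    p <- forecast(f, h = h)$rates
    a <- EWMaleIniData$Dxt / EWMaleIniData$Ext
    a <- a[rownames(a) %in% ages.fit, colnames(a) %in% (endyr + 1):(endyr + h)]
    sqrt(rowMeans((p - a)^2))
})
rowMeans(rmse.all)  # per-age RMSE averaged across the forecast origins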

AkselA
  • That's really great, thanks so much! Very useful. You've got me out of my 2 day slump! Thanks again! – Vicky Nov 21 '18 at 22:21