The issue is that you are calling Metrics::accuracy() rather than forecast::accuracy(), which is the function that will accomplish what I think you want. After explaining why, I also have some general notes about asking questions on Stack Overflow that might be helpful if you have another question for this site in the future.
Metrics::accuracy() vs. forecast::accuracy()

We can see some differences between the functions if we look at the help files (?forecast::accuracy and ?Metrics::accuracy).
The arguments for forecast::accuracy() look like

accuracy(f, x, test = NULL, d = NULL, D = NULL, ...)

where f is "An object of class “forecast”, or a numerical vector containing forecasts...." and x is "An optional numerical vector containing actual values of the same length as object, or a time series overlapping with the times of f." This matches how you tried to use it: the first argument is a forecast-class object and the second is the vector of actual values.
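For a self-contained illustration of that calling pattern, here is a minimal sketch using the built-in AirPassengers series (since we don't have your data; the split points are arbitrary):

```r
library(forecast)
# Fit on the first nine years, hold out the last three
fit <- ets(window(AirPassengers, end = c(1957, 12)))
fc  <- forecast(fit, h = 36)
# First argument: the forecast object; second: the held-out actuals
accuracy(fc, window(AirPassengers, start = c(1958, 1)))
```

This returns the usual error measures (ME, RMSE, MAE, MAPE, and so on) for both the training and test sets.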
If you're wanting to use Metrics::accuracy(), its arguments look like

accuracy(actual, predicted)

where actual is "The ground truth vector, where elements of the vector can be any variable type" and predicted is "The predicted vector, where elements of the vector represent a prediction for the corresponding value in actual." In other words, the predictions you pass would have to be a plain vector of predicted values, not a forecast object with all the other information it carries. I also don't think it gives you the type of accuracy metric you'd want for this sort of analysis; it gives "the proportion of elements in actual that are equal to the corresponding element in predicted".
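To make that concrete, here is a minimal sketch (with made-up vectors) of what Metrics::accuracy() computes:

```r
library(Metrics)
actual    <- c(1, 2, 3, 4)
predicted <- c(1, 2, 3, 5)
# Proportion of exact matches: 3 of 4 elements agree
accuracy(actual, predicted)  # 0.75
```

That is a classification-style metric; a numeric forecast will almost never match the actuals exactly, so it isn't informative here.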
Some advice for asking questions in the future
First, I'd check out the great resource How to make a great R reproducible example. Next, here is the code I used to reproduce your issue; note the changes I had to make just to get started (my comments begin with ###):
#plotting time series from year 1998 to 2008
### Since we don't have t_AMOUNT, we can't recreate your data
# year.time_series <- ts(t_AMOUNT, start = c(1998), frequency = 12) #Monthly 12
### So I did the following to make some dummy data
set.seed(42)
year.time_series <- ts(rnorm(12*11), start = c(1998), frequency = 12)
plot(year.time_series)
#splitting the timeseries for further model evaluation
### The variable name is spelled differently below (year.timeseries
### vs. year.time_series), so I had to add the next line
year.timeseries <- year.time_series
train <- window(year.timeseries, start=1998, end=2005)
test <- window(year.timeseries, start=2005, end=2008)
#using models to check the accuracy results
### We need the forecast library for ets(),
### but it wasn't loaded in your code
library(forecast)
etsfit <- ets(train)
summary(etsfit)
plot(train, main = "ETS Forecast", ylab = "ets(training set)",
cex.lab = 1.5, cex.main = 1.5, cex.axis = 1.5)
lines(etsfit$fitted, col = "orange")
#forecast
forecast.ets <- forecast(etsfit, h = 24)
summary(forecast.ets)
plot(forecast.ets)
plot(forecast.ets, main = "2 Year Forecast Using ETS Model",
xlim = c(1998, 2008), cex.lab = 1.5, cex.main = 1.5, cex.axis = 1.5)
lines(test, col = "red")
library(Metrics)
#input = forecast values, actual values
### With Metrics attached after forecast, an unqualified accuracy()
### dispatches to Metrics::accuracy(), which is what triggers your error
accuracy(forecast.ets, test)
### Qualify the call to get the forecast version you actually want
forecast::accuracy(forecast.ets, test)