If you look closely at the nlm
function, you will see that it passes only one argument (the parameter vector) to your objective function. One solution is:
fun <- function(x){
  s <- x[1]
  y <- x[2]
  (10 - s*(t_1 - y + y*exp(-t_1/y)))^2 +
    (20 - s*(t_2 - y + y*exp(-t_2/y)))^2 +
    (30 - s*(t_3 - y + y*exp(-t_3/y)))^2
}
p <- array(c(0.4, 0.4), dim = c(2, 1))
# p <- c(0.4, 0.4)
ans <- nlm(f = fun, p = p)
Both a vector
and an array
work; however, you can't give two separate arguments like you did.
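To make this runnable on its own, here is a sketch with made-up values for t_1, t_2 and t_3 (these are assumptions; substitute your own data). It shows that a plain vector and a 2x1 array both work as the p argument:
# Hypothetical data; t_1, t_2, t_3 come from your own problem
t_1 <- 1
t_2 <- 2
t_3 <- 3

fun <- function(x){
  s <- x[1]
  y <- x[2]
  (10 - s*(t_1 - y + y*exp(-t_1/y)))^2 +
    (20 - s*(t_2 - y + y*exp(-t_2/y)))^2 +
    (30 - s*(t_3 - y + y*exp(-t_3/y)))^2
}

nlm(f = fun, p = c(0.4, 0.4))$estimate                       # vector: works
nlm(f = fun, p = array(c(0.4, 0.4), dim = c(2, 1)))$estimate # array: works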
EDIT
In numerical optimization the initial point really matters. I advise you to use the optim
function, which is less sensitive to a misspecified initial point.
One idea is to build a grid of many initial points and select the one that gives you the best result:
initialisation <- expand.grid(seq(1, 3, 0.5),
                              seq(1, 3, 0.5))
res <- data.frame(optim = rep(0, nrow(initialisation)),
                  nlm = rep(0, nrow(initialisation)))
for(i in 1:nrow(initialisation)){
  res[i, 1] <- optim(as.numeric(initialisation[i, ]), fun)$value
  res[i, 2] <- try(nlm(f = fun, p = as.numeric(initialisation[i, ]))$minimum, silent = TRUE)
}
res
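The loop above only records the minima; to actually pick the best initialization, you can, for instance, take the row whose optim run achieved the smallest value (a sketch, assuming fun, initialisation and res are defined as above):
# Index of the starting point that led to the smallest optim value
best <- which.min(res$optim)
best_start <- as.numeric(initialisation[best, ])
final <- optim(best_start, fun)
final$par    # the selected estimates of s and y
final$value  # the corresponding minimum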
I insist that with the example above the optim
function is really more stable. I advise you to use it if you have no other constraints.
You can check function parameters thanks to ?nlm
.
I hope it helps.
EDIT 2
fun <- function(x){
  s <- x[1]
  y <- x[2]
  (10 - s*(t_1 - y + y*exp(-t_1/y)))^2 +
    (20 - s*(t_2 - y + y*exp(-t_2/y)))^2 +
    (30 - s*(t_3 - y + y*exp(-t_3/y)))^2
}
I chose this initial point because it seems closer to the optimal one.
p <- c(10, 1)
ans <- nlm(f = fun, p = p)
You can obtain your two parameters like this:
s is :
s <- ans$estimate[1]
y is :
y <- ans$estimate[2]
You also have the optimal value, which is:
ans$minimum :
0.9337047
fun(c(s, y)) :
0.9337047
About my second post: the edit is just there to highlight that optimisation with the nlm
function is a bit tricky because you need to carefully choose the initial value.
The optim
function, also an optimisation routine in R, is more stable, as shown in the example with many initialization points.
The expand.grid
function is useful to obtain a grid like this:
initialisation <- expand.grid(s = seq(2, 3, 0.5),
                              y = seq(2, 3, 0.5))
initialisation :
s y
1 2.0 2.0
2 2.5 2.0
3 3.0 2.0
4 2.0 2.5
5 2.5 2.5
6 3.0 2.5
7 2.0 3.0
8 2.5 3.0
9 3.0 3.0
The res data.frame
gives you the minimum obtained with different initial values.
You can see that the first initial values give you no good result for nlm
but a relatively stable one for optim
.
res <- data.frame(optim = rep(0, nrow(initialisation)),
                  nlm = rep(0, nrow(initialisation)))
for(i in 1:nrow(initialisation)){
  res[i, 1] <- optim(as.numeric(initialisation[i, ]), fun)$value
  nlm_min <- try(nlm(f = fun, p = as.numeric(initialisation[i, ]))$minimum, silent = TRUE)
  res[i, 2] <- if(is.numeric(nlm_min)){
    round(nlm_min, 8)
  }else{
    NA
  }
}
The try
function is just there to keep the loop from breaking. The if
is there to put NA in the right place.
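As a side note, try returns an object of class "try-error" when the wrapped expression fails, so an alternative to checking is.numeric is inherits (a small sketch):
out <- try(stop("nlm failed here"), silent = TRUE)
if(inherits(out, "try-error")){
  out <- NA
}
out  # NA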
res :
optim nlm
1 0.9337094 <NA>
2 0.9337058 0.93370468
3 0.9337054 <NA>
4 0.9337101 0.93370468
5 0.9337125 61.18166446
6 0.9337057 0.93370468
7 0.9337120 0.93370468
8 0.9337080 0.93370468
9 0.9337114 0.93370468
When there are NA
values, it means that nlm
doesn't work well because of the initialization. I advise you to choose optim
if you don't need a really precise optimisation, because of its stability.
For an extensive discussion on optim
vs nlm
, you may have a look there. In your specific case optim
seems to be a better choice; I don't know whether we can generalise.