
I'm trying to fit a function in R and therefore I use nls(). Is there a way to prevent the fitted function from falling below zero?

An easy workaround would be to raise the parameter b0 in the target function after the fit, but this is actually not what I want, because I expect a real fit with the constraint of being positive to lead to a better result.

y <- c(m1, m2, m3, m4, m5, m6, m7, m8, m9, m10)
d <- data.frame(x = seq(1, 10, 1), y = y)   # name the x column so nls() can find it
fitFun <- function(x, add, b0, b1) b0 + (x + add)^b1
m <- nls(y ~ fitFun(x, add, intercept, power), data = d,
         start = list(intercept = 1, power = 3.5, add = 2), trace = TRUE)
B1ANCHi

3 Answers


Are you looking for this? Constraining the parameters to make the prediction non-negative can be tricky if the prediction is a hard-to-invert function of the parameters, but in this case we just have to require b0>=0 ... using @Roland's example,

fit2 <- nls(y ~ b0 + (x + add)^b1,
            algorithm = "port",   # "port" supports box constraints on parameters
            lower = c(b0 = 0, b1 = -Inf, add = -Inf),
            data = df, start = list(b0 = 1, b1 = 3.5, add = 2))
lines(predict(fit2) ~ df$x, col = "purple")

In the resulting plot, blue is the original unconstrained fit, red is @Roland's fit, and purple is the constrained fit above.

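If you'd rather avoid the "port" algorithm, the minpack.lm package provides nlsLM(), whose Levenberg-Marquardt routine also accepts box constraints. A minimal sketch, assuming the same df as above (fit3 is an illustrative name):

# minimal sketch, assuming df from @Roland's simulated example;
# nlsLM() from minpack.lm supports lower/upper bounds
library(minpack.lm)
fit3 <- nlsLM(y ~ b0 + (x + add)^b1,
              data = df,
              start = list(b0 = 1, b1 = 3.5, add = 2),
              lower = c(b0 = 0, b1 = -Inf, add = -Inf))
coef(fit3)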

Ben Bolker
  • @B1ANCHi But you should reconsider your model if the data clearly violates its constraints, as is the case in this example. – Roland Dec 03 '12 at 08:24

You need to change your model. For that, you need to define what should happen when the function value falls below zero. Here is an example that sets those values to 0.

x <- 1:200/100
set.seed(42)
y <- -10 + (x + 1)^3.5 + rnorm(length(x), sd = 3)
df <- data.frame(x, y)

plot(y ~ x, data = df)

fitFun <- function(x, add, b0, b1) {
    res <- b0 + (x + add)^b1
    res[res < 0] <- 0   # clamp negative predictions to zero
    res
}
fit <- nls(y ~ fitFun(x, add, intercept, power),
           data = df, start = list(intercept = 1, power = 3.5, add = 2))
summary(fit)
lines(predict(fit) ~ df$x, col = "red")

[plot: simulated data with the clamped fit in red]
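One caveat with the hard cut-off above is that the model has zero gradient wherever the clamp is active, which can occasionally stall the optimizer's steps. If that happens, a smooth softplus approximation of the clamp is one option; a minimal sketch, assuming the same df (fitFunSoft and the sharpness constant k are illustrative choices):

fitFunSoft <- function(x, add, b0, b1, k = 50) {
    res <- b0 + (x + add)^b1
    # numerically stable softplus: smooth approximation of pmax(res, 0);
    # larger k sharpens the corner at zero
    pmax(res, 0) + log1p(exp(-k * abs(res))) / k
}
fitSoft <- nls(y ~ fitFunSoft(x, add, intercept, power),
               data = df, start = list(intercept = 1, power = 3.5, add = 2))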

Roland
  • Thanks for the answer. I will think about how I could change my model, but actually I see the model as fixed and would rather tell the optimizer that the function values should stay above zero, e.g. as an additional condition in the optimization process. – B1ANCHi Dec 02 '12 at 13:38
  • @B1ANCHi I think you are confused. If you put constraints on the outcome of your model, you effectively change your model. You can do that by actually changing the model or by taking the cheap way out, as I've demonstrated here. The optimizer only comes into play if you want to constrain the model parameters. – Roland Dec 02 '12 at 14:37

Thanks a lot for the answers. Maybe I didn't give enough information about my problem, but I'm not yet allowed to post pictures, and describing everything would have led to a short story.

@Roland was perfectly right that it's not the optimizer's task to care about the behaviour of the target function, but as I mentioned, I consider the model to be fixed.

@Ben Bolker's suggestion to limit the additive part of the function to positive values led to an unsatisfying result.

What I didn't mention was that m1 to m10 are mean values of a data collection I recorded. I solved my problem by using the inverse variances of the recorded series as weights during the fitting process.

y <- c(m1, m2, m3, m4, m5, m6, m7, m8, m9, m10)
d <- data.frame(x = seq(1, 10, 1), y = y)   # name the x column so nls() can find it
# per-series variances of the recorded data
vars <- sapply(list(lt1, lt2, lt3, lt4, lt5, lt6, lt7, lt8, lt9, lt10),
               function(s) var(s$V1))
weights <- max(vars)/vars   # series with smaller variance get larger weight
fitFun <- function(x, add, b0, b1) b0 + (x + add)^b1
m <- nls(y ~ fitFun(x, add, intercept, power), data = d, weights = weights,
         start = list(intercept = 1, power = 3.5, add = 2), trace = TRUE)
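Note that nls() minimizes the weighted residual sum of squares, so only the relative sizes of the weights matter; the max(vars) scaling is cosmetic, and plain inverse variances give the same estimates. A quick check, assuming the objects above exist (m2 is an illustrative name):

# rescaling the weights leaves the parameter estimates unchanged
m2 <- nls(y ~ fitFun(x, add, intercept, power), data = d, weights = 1/vars,
          start = list(intercept = 1, power = 3.5, add = 2))
all.equal(coef(m), coef(m2))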
B1ANCHi