
I am trying to fit a five-parameter (a, b, c, d, e) model in which one of the parameters is constrained by another, let's say:

0 < d < 1

e < |d|

I am currently using zfit, which, as far as I know, uses iminuit under the hood.

So far I have only created the zfit.Parameter objects and set their limits so that the ranges accessible to them are valid, again, let's say:

d = zfit.Parameter('d', value=0.5, lower_limit=0.3, upper_limit=1.0, step_size=0.01)

e = zfit.Parameter('e', value=0.1, lower_limit=0.0, upper_limit=0.3, step_size=0.01)

It has been working well so far, but I think it is not the right way to do it.

So my question is, what is the correct way to deal with this kind of constraint?

Cheers

Horace

1 Answer


I would use these limits with caution, as they block the variables; ideally, the limits should be far away from the final fitted values.

There are two ways to achieve what you want:

  • either impose the constraint "mathematically", as a logical consequence: define one parameter from another using a composed parameter (a parameter that is a function of other parameters). If possible, this should be the preferred way (see the sketch after this list).
  • Another option is to impose this restriction in the likelihood with an additional term. This, however, can have repercussions, as you modify the likelihood: the minimizer will find a minimum, but it may not be the minimum you were looking for. What you can use are SimpleConstraints, adding a penalty term to the likelihood if any of the above is violated (e.g. tf.cast(tf.greater(d, 1), tf.float64) * 100.). Maybe also make sure that Minuit is run with use_minuit_grad.
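
A minimal sketch of the composed-parameter route for the constraint e < |d| above, assuming 0 < d < 1 as in the question. The fraction parameter f_e is an illustrative choice that is not in the original post, and the exact ComposedParameter call signature can differ between zfit versions:

```python
import zfit

# d stays a free parameter in (0, 1), as in the question.
d = zfit.Parameter('d', 0.5, 0.3, 1.0, step_size=0.01)

# Hypothetical helper parameter: a free fraction in (0, 1).
f_e = zfit.Parameter('f_e', 0.2, 0.0, 1.0, step_size=0.01)

# e = f_e * d, so 0 <= e < d = |d| holds by construction and no
# explicit constraint on e is needed in the likelihood.
e = zfit.ComposedParameter('e', lambda p: p['f_e'] * p['d'],
                           params={'f_e': f_e, 'd': d})
```

The fit then runs over d and f_e; e is a derived quantity that automatically respects the constraint for every point the minimizer tries.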
Mayou36
  • Hi! I've just tried the restriction option and followed [this](https://zfit.readthedocs.io/en/0.5.3/intro/loss.html) to do it; however, I think it's outdated, so I looked around, found the [upgrade guide](https://zfit.readthedocs.io/en/0.5.3/project/upgrade_guide.html) and added this code: `def custom_constraint(): return tf.cast(tf.greater(d, 1.), tf.float64) * 100.`, `dCon = zfit.constraint.SimpleConstraint(custom_constraint, [d])`, `nll.add_constraints(dCon)`. I just want to check whether this is correct, and whether there is a way to obtain the actual NLL value from the loss object? – Horace Jul 15 '20 at 08:41
  • Everything was answered in your pyHEP presentation, thanks a lot! – Horace Jul 16 '20 at 15:24
  • Perfect, thanks a lot! I forgot to mention it, but I've updated the docs regarding your point above; they were indeed outdated – Mayou36 Jul 16 '20 at 17:05