9

There are cases where I'm not actually interested in the full posterior of a Bayesian inference, but only in the maximum likelihood (or maximum a posteriori for suitably chosen priors), and possibly its Hessian. PyMC3 has functions for that, but find_MAP seems to return the model parameters in transformed form, depending on the prior distribution placed on them. Is there an easy way to get the untransformed values from these? The output of find_hessian is even less clear to me, but it is most likely in the transformed space too.
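
For illustration, a minimal reproduction of what I mean (the exact key name, e.g. 'p_interval__', is an assumption that depends on the PyMC3 version; the model itself is just a stand-in):

import numpy as np
import pymc3 as pm

with pm.Model():
    p = pm.Uniform('p', 0, 1)  # gets an interval transform internally
    w = pm.Binomial('w', n=9, p=p, observed=6)
    map_estimate = pm.find_MAP()
# may print the transformed (logit-scale) value under a key
# like 'p_interval__' rather than 'p'
print(map_estimate)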

burnpanck

2 Answers

6

Maybe a simpler solution is to pass the argument transform=None, so PyMC3 skips the transformation, and then use find_MAP.

Here is an example with a simple model:

import numpy as np
import pymc3 as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
    p = pm.Uniform('p', 0, 1, transform=None)  # keep p on its original scale
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    mean_q = pm.find_MAP()
    # standard deviation from the curvature (Hessian) at the MAP
    std_q = ((1 / pm.find_hessian(mean_q)) ** 0.5)[0]
print(mean_q['p'], std_q)
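
If I computed correctly, this should print values close to 0.67 and 0.16: the mode of the resulting Beta(7, 4) posterior and the standard deviation of its quadratic (Laplace) approximation.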

Have you considered using ADVI?
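
In case it helps, here is a minimal ADVI sketch (assuming PyMC3 >= 3.1, where pm.fit is available; note that ADVI works in the transformed space, so don't pass transform=None here):

with pm.Model() as advi_model:
    p = pm.Uniform('p', 0, 1)  # keep the default transform for ADVI
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    approx = pm.fit(n=10000, method='advi')  # mean-field variational fit
trace = approx.sample(1000)  # draw samples from the fitted approximation
print(trace['p'].mean(), trace['p'].std())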

aloctavodia
    I have, but in my case the model is simple enough that the MAP+Hessian characterise the posterior distribution very well (it is close to normal). MAP+Hessian is much faster to compute than ADVI or MC sampling. Of course, that is an abuse of PyMC3 (by not sampling), but its model specification is great, and I want to keep the option of sampling in case changes in the model/data lead to more complicated posteriors. – burnpanck Nov 23 '16 at 10:44
1

I came across this once more and found a way to get the untransformed values from the transformed ones, in case somebody else needs this as well. The gist of it is that the untransformed values are essentially Theano expressions that can be evaluated given the transformed values. PyMC3 helps here a little by providing the Model.fn() function, which creates such an evaluation function accepting values by name. You then only need to supply the untransformed variables of interest via the outs argument. A complete example:

import numpy as np
import pymc3 as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
    p = pm.Uniform('p', 0, 1)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    map_estimate = pm.find_MAP()
# create a function that evaluates p, given the transformed values
evalfun = normal_aproximation.fn(outs=p)
# create name:value mappings for the free variables (i.e. the transformed values)
inp = {v.name: map_estimate[v.name] for v in normal_aproximation.free_RVs}
# now use that input mapping to evaluate p
p_estimate = evalfun(inp)

outs can also receive a list of variables; evalfun will then output the values of the corresponding variables in the same order.
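
For instance, a hypothetical extension of the snippet above (the expression 1 - p is only for illustration; any Theano expression in the model's graph should work):

# outputs are returned in the same order as the outs list
evalfun = normal_aproximation.fn(outs=[p, 1 - p])
p_est, q_est = evalfun(inp)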

burnpanck