A simple approach to approximating the maximum likelihood of a model given some data is grid approximation. For example, in R, we can generate a grid of parameter values and then evaluate the likelihood of each value given some data (example from Statistical Rethinking by McElreath):
p_grid <- seq(from=0, to=1, length.out=1000)
likelihood <- dbinom(6, size=9, prob=p_grid)
Here, likelihood is a vector of 1000 values, and I assume this is an efficient way to produce such a vector.
I am new to Julia (and not so good at R), so my approach to doing the same relies on comprehension syntax:
using Distributions
p_grid = collect(LinRange(0, 1, 1000))
likelihood = [pdf(Binomial(9, p), 6) for p in p_grid]
which is not only clunky but somehow seems inefficient because a new Binomial gets constructed 1000 times. Is there a better, perhaps vectorized, approach to accomplishing the same task?
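For reference, one alternative I have seen is Julia's dot-broadcasting syntax, which (as far as I understand) fuses the elementwise operations into a single loop; whether it actually avoids the repeated construction cost is part of what I am unsure about:

```julia
using Distributions

p_grid = range(0, 1, length=1000)  # lazy range; no collect needed for broadcasting

# Broadcast the constructor and pdf over the grid in one fused pass.
# Binomial.(9, p_grid) builds a distribution per grid point, then pdf.(..., 6)
# evaluates the likelihood of 6 successes in 9 trials at each one.
likelihood = pdf.(Binomial.(9, p_grid), 6)
```

My (possibly wrong) impression is that because Binomial is a small immutable struct, constructing one per grid point may be cheap anyway, but I would like confirmation of the idiomatic approach.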