Answer
logprob = dist.log_prob(sample)
means: get the logarithmic probability (logprob) of an experiment sample (sample) under a given distribution (dist).
(This may be hard to grasp at first; the explanation below works through a concrete example.)
Explanation
We use a simple example to see what log_prob does.
Forward test
First, draw a probability a from a uniform distribution on [0, 1]:
import torch.distributions as D
import torch
a = torch.empty(1).uniform_(0, 1)
a
# OUTPUT: tensor([0.3291])
Based on this probability and D.Bernoulli, we can instantiate a Bernoulli distribution b = D.Bernoulli(a). This means the result of every Bernoulli experiment, b.sample(), is either 1 with probability a = 0.3291 or 0 with probability 1 - a = 0.6709:
b = D.Bernoulli(a)
b
# OUTPUT: Bernoulli()
We can verify this by running one Bernoulli experiment to get a sample c (keep in mind that c has probability 0.3291 of being 1 and probability 0.6709 of being 0):
c = b.sample()
c
# OUTPUT: tensor([0.])
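As a side check (a sketch not in the original walkthrough, with the probability fixed at 0.3291 for reproducibility), we can draw many samples and confirm that the empirical frequency of 1s matches a:

```python
import torch
import torch.distributions as D

torch.manual_seed(0)  # fixed seed so the run is reproducible
b = D.Bernoulli(torch.tensor(0.3291))
samples = b.sample((100000,))
# The fraction of 1s should be close to the success probability 0.3291
print(samples.mean())
```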
With the Bernoulli distribution b and the sample c, we can get the logarithmic probability of c (a Bernoulli experiment sample) under the distribution b (a specific Bernoulli distribution with probability 0.3291 of being 1), or, officially, the log of the probability density/mass function evaluated at the value c:
b.log_prob(c)
# OUTPUT: tensor([-0.3991])
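To make the correspondence explicit (a small sketch, again fixing the probability at 0.3291 instead of drawing it randomly): for a Bernoulli distribution, log_prob of the sample 0 is log(1 - a) and log_prob of the sample 1 is log(a):

```python
import torch
import torch.distributions as D

p = torch.tensor(0.3291)            # success probability from the example above
b = D.Bernoulli(p)
lp0 = b.log_prob(torch.tensor(0.))  # log P(X = 0) = log(1 - p)
lp1 = b.log_prob(torch.tensor(1.))  # log P(X = 1) = log(p)
print(lp0)  # tensor(-0.3991), i.e. log(0.6709)
print(lp1)  # tensor(-1.1114), i.e. log(0.3291)
```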
Backward verification
As we already know, the probability of a sample being 0 is 0.6709 (for a single experiment, this is simply the probability mass function evaluated at 0), so we can verify the log_prob result with:
torch.log(torch.tensor(0.6709))
# OUTPUT: tensor(-0.3991)
It equals the logarithmic probability of c under b. (Finished!)
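The same idea carries over to continuous distributions, where log_prob returns the log of the probability density rather than of a probability mass. A minimal sketch with a standard normal, whose density at 0 is 1/sqrt(2*pi):

```python
import torch
import torch.distributions as D

n = D.Normal(torch.tensor(0.0), torch.tensor(1.0))  # standard normal
x = torch.tensor(0.0)
# log of the density at 0: log(1 / sqrt(2*pi)) ≈ -0.9189
print(n.log_prob(x))  # tensor(-0.9189)
```

Note that for continuous distributions log_prob can even be positive, since a density can exceed 1 (e.g. a normal with a very small standard deviation).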
Hope it's useful for you.