
I'm using emcee to draw samples from the same ln_prob function twice, but both runs yield exactly the same samples.

I am using the same initial state for both samplers, but I don't see why it should matter.

Am I wrong in thinking that the two runs should yield different results?

import emcee
import numpy as np

NWALKERS = 32
NDIM = 2
NSAMPLES = 1000

def ln_gaussian(x):
    # mu = 0, cov = 1
    a = (2*np.pi)** -0.5
    return np.log(a * np.exp(-0.5 * np.dot(x,x)))

p0 = np.random.rand(NWALKERS, NDIM)
sampler1 = emcee.EnsembleSampler(NWALKERS, NDIM, ln_gaussian)
sampler2 = emcee.EnsembleSampler(NWALKERS, NDIM, ln_gaussian)

state1 = sampler1.run_mcmc(p0, 100) # burn in
state2 = sampler2.run_mcmc(p0, 100) # burn in

sampler1.reset()
sampler2.reset()

# production runs: NSAMPLES steps x NWALKERS walkers each
sampler1.run_mcmc(state1, NSAMPLES)
sampler2.run_mcmc(state2, NSAMPLES)
s1 = sampler1.get_chain(flat=True)
s2 = sampler2.get_chain(flat=True)

s1 - s2

The output is

array([[0., 0.],
       [0., 0.],
       [0., 0.],
       ...,
       [0., 0.],
       [0., 0.],
       [0., 0.]])
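
To narrow this down, here is how I would check whether the two samplers end up with identical internal generator states. This is only a sketch based on my reading of emcee 3, where the sampler exposes a random_state property (the tuple returned by numpy's RandomState.get_state()) and run_mcmc returns a State whose coords attribute holds the walker positions; the attribute names may differ in other versions:

# did the two burn-ins already land on identical walker positions?
print(np.array_equal(state1.coords, state2.coords))

# are the samplers' internal generators in the same state?
# (compare the Mersenne Twister key array and position from get_state())
rs1, rs2 = sampler1.random_state, sampler2.random_state
print(np.array_equal(rs1[1], rs2[1]) and rs1[2] == rs2[2])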

If instead I use different initial states for the two samplers,

p0 = np.random.rand(NWALKERS, NDIM)
p1 = np.random.rand(NWALKERS, NDIM)
state1 = sampler1.run_mcmc(p0, 100)  # burn in from p0
state2 = sampler2.run_mcmc(p1, 100)  # burn in from p1

then s1 - s2 is no longer all zeros:

array([[-0.70474519, -0.09671908],
       [-0.31555036, -0.33661664],
       [ 0.75735537,  0.01540277],
       ...,
       [ 2.84810783, -2.11736446],
       [-0.55164227, -0.26478868],
       [ 0.01301593, -1.76233017]])

But why should the initial state matter? I thought the sampling was random anyway.
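
For context, my mental model is that a pseudo-random generator is fully determined by its internal state, so two generators in the same state produce identical "random" draws. A minimal numpy-only illustration (nothing emcee-specific here):

import numpy as np

rng_a = np.random.RandomState(42)
rng_b = np.random.RandomState(42)
rng_c = np.random.RandomState(7)

# same seed -> same internal state -> identical draws
print(np.array_equal(rng_a.rand(5), rng_b.rand(5)))  # expect True

# a differently seeded generator diverges immediately
print(np.array_equal(rng_a.rand(5), rng_c.rand(5)))  # expect False

So I could understand getting identical chains if the two samplers somehow shared (or copied) the same generator state, but I only reused the initial walker positions p0, not any generator, so I don't see where that would come from.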
