I know how to calculate the KL divergence for a Gaussian mixture with Python and scikit-learn, given its parameters (weights, means, and covariances) as np.array, as shown in the question linked below.
GaussianMixture initialization using component parameters - sklearn
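For reference, here is roughly what I do on the scikit-learn side (a minimal sketch; make_gmm is just my own helper that sets the fitted attributes directly, as in the linked question, and since there is no closed form for the KL between two mixtures I estimate it by Monte Carlo sampling):

import numpy as np
from sklearn.mixture import GaussianMixture

def make_gmm(weights, means, variances):
    # Build a 1-D GaussianMixture from fixed parameters without fitting,
    # by setting the fitted attributes directly (the trick from the link above).
    gmm = GaussianMixture(n_components=len(weights), covariance_type='diag')
    gmm.weights_ = np.asarray(weights)
    gmm.means_ = np.asarray(means).reshape(-1, 1)
    gmm.covariances_ = np.asarray(variances).reshape(-1, 1)
    # For 'diag' covariances, precisions_cholesky_ is 1 / standard deviation.
    gmm.precisions_cholesky_ = 1.0 / np.sqrt(gmm.covariances_)
    return gmm

gmm_p = make_gmm([0.3, 0.7], [-1.0, 1.0], [0.01, 0.25])
gmm_q = make_gmm([0.4, 0.6], [-0.4, 1.2], [0.04, 0.36])

# Monte Carlo estimate: KL(p||q) ~= mean over x~p of log p(x) - log q(x)
x, _ = gmm_p.sample(100000)
kl_mc = np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x))
print('KL(p||q) ~', kl_mc)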
But I am wondering: with TensorFlow, is there any way to calculate the KL divergence between two Gaussian mixtures whose parameters are given as Tensors?
1) I tried the scikit-learn approach above inside TensorFlow, but it didn't work, since TensorFlow doesn't produce actual values until a session is executed.
2) There are some TF packages, but none of them implements the KL divergence for Gaussian mixtures exactly:
https://www.tensorflow.org/api_docs/python/tf/contrib/distributions/Mixture
https://www.tensorflow.org/api_docs/python/tf/distributions/kl_divergence
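For example, kl_divergence does give me an exact closed-form answer for a registered pair such as two Normals, which is what made me hope it would also handle Mixtures (a minimal check, assuming TF 1.x with contrib):

import tensorflow as tf
ds = tf.contrib.distributions

# Closed-form KL between two univariate Normals: this pair is registered.
kl_nn = ds.kl_divergence(ds.Normal(loc=0., scale=1.),
                         ds.Normal(loc=1., scale=2.))
with tf.Session() as sess:
    print(sess.run(kl_nn))  # analytic value, no sampling involved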
Any help is greatly appreciated.
Later, I tried it with the latest TF library, as below.
import tensorflow as tf
print('tensorflow', tf.__version__)  # for Python 3
import numpy as np
import matplotlib.pyplot as plt

ds = tf.contrib.distributions
kl_divergence = tf.contrib.distributions.kl_divergence

# Gaussian mixture 1
mix = 0.3  # weight of the first component
bimix_gauss1 = ds.Mixture(
    cat=ds.Categorical(probs=[mix, 1. - mix]),  # component weights
    components=[
        ds.Normal(loc=-1., scale=0.1),
        ds.Normal(loc=+1., scale=0.5),
    ])

# Gaussian mixture 2
mix = 0.4  # weight of the first component
bimix_gauss2 = ds.Mixture(
    cat=ds.Categorical(probs=[mix, 1. - mix]),  # component weights
    components=[
        ds.Normal(loc=-0.4, scale=0.2),
        ds.Normal(loc=+1.2, scale=0.6),
    ])

# KL divergence between mixture 1 and mixture 2
kl_value = kl_divergence(
    distribution_a=bimix_gauss1,
    distribution_b=bimix_gauss2,
    allow_nan_stats=True,
    name=None)

sess = tf.Session()
with sess.as_default():
    x = tf.linspace(-2., 3., int(1e4)).eval()
    plt.plot(x, bimix_gauss1.prob(x).eval(), 'r-')
    plt.plot(x, bimix_gauss2.prob(x).eval(), 'b-')
    plt.show()
    print('kl_value=', kl_value.eval())
Then I got this error:

NotImplementedError: No KL(distribution_a || distribution_b) registered for distribution_a type Mixture and distribution_b type Mixture
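The only workaround I can think of is a Monte Carlo estimate, since Mixture does implement sample() and log_prob(). A rough sketch (an approximation, not an exact KL) would be:

# Monte Carlo estimate of KL(GM1 || GM2): sample from GM1 and average
# the log-density ratio. This is only an approximation, not an exact KL.
n_samples = int(1e5)
x_samples = bimix_gauss1.sample(n_samples)
kl_mc = tf.reduce_mean(bimix_gauss1.log_prob(x_samples)
                       - bimix_gauss2.log_prob(x_samples))
with tf.Session() as sess:
    print('MC KL estimate:', sess.run(kl_mc))

But I would much prefer an exact or built-in solution, if one exists.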
I am very sad now. :(