Gibbs Sampling Using Edward




Tuesday, May 8, 2018

Gibbs sampling is an MCMC method for drawing samples from a complex distribution (usually a posterior in Bayesian inference).

In this post I aim to show how to do Gibbs sampling using Edward, "a Python library for probabilistic modeling". If you are new to Edward, you can install the package by following these steps.

Let's assume we have a bivariate normal random variable X = [X^0, X^1]:

X \sim \mathcal{N}([3., 2.], [[1, -0.6], [-0.6, 1]])

The following image shows the pdf of X.


We can define the multivariate normal in Edward as follows:

import edward as ed  
import tensorflow as tf  

mu = [3., 2.]  
cov = [[1., -0.6], [-0.6, 1.]]  
scale = tf.cholesky(cov)  
mvn = ed.models.MultivariateNormalTriL(loc=mu, scale_tril=scale)
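Note that `MultivariateNormalTriL` is parameterized by a lower-triangular factor of the covariance rather than by the covariance matrix itself, which is why we pass the Cholesky factor. A quick NumPy sanity check of that factorization (independent of Edward):

```python
import numpy as np

cov = np.array([[1., -0.6], [-0.6, 1.]])
# Cholesky gives a lower-triangular L such that L @ L.T reproduces cov
scale = np.linalg.cholesky(cov)
print(np.allclose(scale @ scale.T, cov))  # True
```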

For Gibbs sampling we need to:

  1. Derive the conditional pdf of each dimension: P(X^0 | X^1) and P(X^1 | X^0).
  2. Initialize each random variable with arbitrary values, x_0^0 and x_0^1.
  3. Draw samples x_t^0 ~ P(X^0 | X^1 = x_{t-1}^1) and x_t^1 ~ P(X^1 | X^0 = x_t^0).
  4. Repeat step 3, N times.
  5. Keep (x_t^0, x_t^1) for t = M, ..., N, where N is the total number of samples and M is the number of burn-in samples.
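The steps above can be sketched in plain NumPy, using the standard conditional distributions of a bivariate normal (a minimal sketch for reference, not Edward code):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([3., 2.])
cov = np.array([[1., -0.6], [-0.6, 1.]])

N, M = 5000, 500  # total samples and burn-in samples

def sample_conditional(i, other_value):
    """Draw from X^i | X^j = other_value for a bivariate normal."""
    j = 1 - i
    mean = mu[i] + cov[i][j] / cov[j][j] * (other_value - mu[j])
    var = cov[i][i] - cov[i][j] ** 2 / cov[j][j]
    return rng.normal(mean, np.sqrt(var))

samples = np.empty((N, 2))
x0, x1 = 0.0, 0.0  # step 2: arbitrary initialization
for t in range(N):  # steps 3 and 4
    x0 = sample_conditional(0, x1)
    x1 = sample_conditional(1, x0)
    samples[t] = [x0, x1]

samples = samples[M:]  # step 5: discard the burn-in samples
print(samples.mean(axis=0))  # should be close to [3, 2]
```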

In Edward, we can define the conditional distributions as follows:

x0 = tf.placeholder(shape=[], dtype=tf.float32)  
x1 = tf.placeholder(shape=[], dtype=tf.float32)  
# X^0 | X^1 = x1: mean = mu0 + cov01/cov11 * (x1 - mu1), std = sqrt(cov00 - cov01^2/cov11)
normal0_given1 = ed.models.Normal(loc=mu[0] + cov[0][1] / cov[1][1] * (x1 - mu[1]), scale=(cov[0][0] - cov[0][1] ** 2 / cov[1][1]) ** 0.5)  
normal1_given0 = ed.models.Normal(loc=mu[1] + cov[1][0] / cov[0][0] * (x0 - mu[0]), scale=(cov[1][1] - cov[1][0] ** 2 / cov[0][0]) ** 0.5)

In the code above, x0 and x1 are two placeholders holding the samples of X^0 and X^1 from the previous iteration.
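As a sanity check on those conditional parameters, we can evaluate them with plain Python (using the standard bivariate-normal conditioning formulas):

```python
mu = [3., 2.]
cov = [[1., -0.6], [-0.6, 1.]]

# X^0 | X^1 = x1 is normal with these parameters:
cond_var = cov[0][0] - cov[0][1] ** 2 / cov[1][1]   # 1 - 0.36 = 0.64
cond_std = cond_var ** 0.5                          # 0.8
# At x1 = mu[1], the conditional mean collapses to mu[0]:
cond_mean = mu[0] + cov[0][1] / cov[1][1] * (mu[1] - mu[1])
print(cond_mean, cond_var, cond_std)
```

The negative correlation shrinks the conditional standard deviation from 1 to 0.8, which is what makes the Gibbs chain mix reasonably well here.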

Now it's time to implement steps 2 through 5:

N = 5000  
M = 500  # number of burn-in samples to discard  
data_sampled = [[0, 0]] * N  
with tf.Session() as sess:  
    x1_sample = 0.  # step 2: arbitrary initialization  
    for i in range(N):  
        x0_sample = sess.run(normal0_given1.sample(1), {x1: x1_sample})[0]  # x0 ~ P(X0 | X1=x1)
        x1_sample = sess.run(normal1_given0.sample(1), {x0: x0_sample})[0]  # x1 ~ P(X1 | X0=x0)
        data_sampled[i] = [x0_sample, x1_sample]
data_sampled = data_sampled[M:]  # step 5: discard the burn-in samples

data_sampled above contains the drawn samples. We can visualize them with a 2D histogram:

import matplotlib.pyplot as plt  

plt.hist2d([x[0] for x in data_sampled], [x[1] for x in data_sampled], bins=100)  
plt.show()

Here we go. Edward helped us write Gibbs sampling in fewer than 10 lines of code. The whole code can be found on GitHub.
