How to implement Multinomial conditional distributions depending on the conditional binary value in TensorFlow Probability


I am trying to build a graphical model in TensorFlow Probability, where we first sample the number of positive (1) and negative (0) examples (count_i) from a Categorical distribution and then construct a Multinomial distribution (Y_i) depending on the value of count_i. These events (Y_i) are mutually exclusive:

Y_1 ~ Multinomial([0.9, 0.1, 0.05, 0.05, 0.1], total_count = tf.reduce_sum(tf.cast(count == 1, tf.float32)))

Y_2 ~ Multinomial([0.99, 0.01, 0., 0., 0.], total_count = tf.reduce_sum(tf.cast(count == 0, tf.float32)))

I have read these tutorials; however, I am stuck on two issues:

  1. This code generates two arrays of length 500, whereas I only need one array of 500. What should I change so that only one sample is drawn from the Categorical distribution, and the Multinomial is then constructed from the overall count of the value we are conditioning on?
  2. The sample from the Categorical distribution contains only 0s, whereas it should be a mix of 0s and 1s. What am I doing wrong here?

My code is as follows. You can run it to reproduce the behaviour:

import tensorflow as tf
from tensorflow_probability import distributions as tfd

def simplified_model():
  return tfd.JointDistributionSequential([
      tfd.Uniform(low=0., high=1., name='e'),  # e
      lambda e: tfd.Sample(tfd.Categorical(probs=tf.stack([e, 1.-e], 0)),
                           sample_shape=500),  # count; should it be independent?
      lambda count: tfd.Multinomial(
          probs=tf.constant([[.9, 0.1, 0.05, 0.05, 0.1], [0.99, 0.01, 0., 0., 0.]]),
          total_count=tf.cast(tf.stack([tf.reduce_sum(tf.cast(count == 1, tf.float32)),
                                        tf.reduce_sum(tf.cast(count == 0, tf.float32))], 0),
                              dtype=tf.float32))
  ])

tt = simplified_model()
tt.resolve_graph()
tt.sample(1)

CodePudding user response:

The first row of the final output will be your Y_1 and the second row will be your Y_2. The key is that this output will always have shape (2, 5), because that is the shape of the probability table you pass to tfd.Multinomial. The reason your Categorical only ever produced 0s is that tfd.Categorical treats the last axis of probs as the category axis: stacking e and 1.-e along axis 0 gives probs of shape (2, 1), i.e. a batch of two distributions with a single category each, which also explains why you got two arrays of length 500. Stacking along the last axis instead gives a single distribution over the two categories, which is what the code below does.
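
To see that axis behaviour in isolation, here is a minimal standalone sketch (not part of the answer's model; the value of e is hand-picked purely for illustration):

import tensorflow as tf
from tensorflow_probability import distributions as tfd

e = tf.constant([0.3])  # shape (1,), like a single draw from the Uniform

# Stacking along axis 0 gives probs of shape (2, 1): a batch of two
# Categoricals with only one category each, so every sample is 0.
bad = tfd.Categorical(probs=tf.stack([e, 1. - e], 0))
print(bad.batch_shape)   # (2,)
print(bad.sample(5))     # shape (5, 2), all zeros

# Stacking along the last axis gives probs of shape (1, 2): a single
# Categorical over the two categories 0 and 1.
good = tfd.Categorical(probs=tf.stack([e, 1. - e], -1))
print(good.batch_shape)  # (1,)
print(good.sample(5))    # shape (5, 1), a mix of 0s and 1s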

Code:

import tensorflow as tf
import tensorflow_probability as tfp

from tensorflow_probability import distributions as tfd


# helper: count how many 1s and 0s are in the categorical sample
def _get_counts(vec):
  zeros = tf.reduce_sum(tf.cast(vec == 0, tf.float32))
  ones = tf.reduce_sum(tf.cast(vec == 1, tf.float32))
  return tf.stack([ones, zeros], 0)

joint = tfd.JointDistributionSequential([
    tfd.Sample(  # sample from uniform to make it 2D
        tfd.Uniform(0., 1., name="e"), 1),
    lambda e: tfd.Sample(
        tfd.Categorical(probs=tf.stack([e, 1.-e], -1)),  # last axis = category axis
        500),
    lambda c: tfd.Multinomial(
        probs=[
            [0.9, 0.1, 0.05, 0.05, 0.1],
            [0.99, 0.01, 0., 0., 0.],
        ],
        total_count=_get_counts(c),
    )
])

joint.sample(5)  # or however many you want to sample

Output:

# [<tf.Tensor: shape=(5, 1), dtype=float32, numpy=
#  array([[0.5611458 ],
#         [0.48223293],
#         [0.6097224 ],
#         [0.94013655],
#         [0.14861858]], dtype=float32)>,
#  <tf.Tensor: shape=(5, 1, 500), dtype=int32, numpy=
#  array([[[1, 0, 0, ..., 1, 0, 1]], 
#  
#         [[1, 1, 1, ..., 1, 0, 0]],
#  
#         [[0, 0, 0, ..., 1, 0, 0]],
#  
#         [[0, 0, 0, ..., 0, 0, 0]],
#  
#         [[1, 0, 1, ..., 1, 0, 1]]], dtype=int32)>,
#  <tf.Tensor: shape=(2, 5), dtype=float32, numpy=
#  array([[ 968.,  109.,    0.,    0.,    0.],
#         [1414.,    9.,    0.,    0.,    0.]], dtype=float32)>]
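
As a quick sanity check (not from the original answer), the same resolve_graph() call used in the question works here too, and the per-draw shapes can be inspected directly:

# Dependency structure inferred by JointDistributionSequential from the
# lambda arguments (unnamed distributions get auto-generated names).
print(joint.resolve_graph())

# Shapes of a single joint draw: e is (1,), the categorical sample is
# (1, 500), and the multinomial counts are (2, 5).
e, c, y = joint.sample()
print(e.shape, c.shape, y.shape)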