The parameter kl_use_exact in DenseVariational layer of TF

Time:06-24

I'm trying to create a Bayesian neural network using the DenseVariational layer in TensorFlow Probability. My question is: when we set the parameter kl_use_exact to False, does that mean the prior function is not taken into consideration? I tried to look at the source code of the DenseVariational class (the _make_kl_divergence_penalty function), and I'm more confused than before; I didn't understand what kl_use_exact does.

CodePudding user response:

kl_use_exact specifies how the KL divergence between the variational posterior and the prior is computed. The prior is taken into account in both cases; only the calculation method changes.

False: the KL divergence is approximated, using samples drawn from the posterior.

True: the exact (analytic) KL value is used. However, it can only be set to True if a closed-form KL divergence between the two distributions is registered in TensorFlow Probability.
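As a rough illustration of the difference (a sketch, not the layer's internal code), suppose the posterior and prior are both tfd.Normal distributions, for which an analytic KL divergence is registered in TFP. kl_use_exact=True then corresponds to the closed-form value, while kl_use_exact=False corresponds to a sample-based estimate built from the posterior:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy posterior and prior; an analytic KL(Normal || Normal) is registered in TFP.
posterior = tfd.Normal(loc=0.5, scale=1.2)
prior = tfd.Normal(loc=0.0, scale=1.0)

# kl_use_exact=True: use the registered closed-form KL divergence.
exact_kl = tfd.kl_divergence(posterior, prior)

# kl_use_exact=False: estimate the same quantity by sampling from the posterior
# and averaging log q(z) - log p(z). Note the prior is still used here.
z = posterior.sample(10_000, seed=42)
approx_kl = tf.reduce_mean(posterior.log_prob(z) - prior.log_prob(z))

print(float(exact_kl), float(approx_kl))  # similar values; the second is a noisy estimate
```

If the distributions returned by your make_posterior_fn and make_prior_fn have no registered KL (for example a mixture prior), tfd.kl_divergence raises NotImplementedError, so in that case you have to leave kl_use_exact=False and rely on the sampled estimate.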
