I'm working on Python deep learning code right now, and I want to know what is going on inside the network I designed. Below is the sample code I'm working on.
My question is: is it possible to see the processed image inside the network? For example, I want to see how my input image changes after "p1" and "p2". If it is possible, how can I see it?
import tensorflow as tf

IMG_WIDTH = 256
IMG_HEIGHT = 256
IMG_CHANNELS = 3

# define input
inputs = tf.keras.layers.Input(shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS))
# s = tf.keras.layers.Lambda(lambda x: x / 255)(inputs)

# define contraction path
c1_1 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)
c1_2 = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same')(c1_1)
p1 = tf.keras.layers.MaxPooling2D((2, 2), strides=2)(c1_2)

c2_1 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', padding='same')(p1)
c2_2 = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', padding='same')(c2_1)
p2 = tf.keras.layers.MaxPooling2D((2, 2), strides=2)(c2_2)
CodePudding user response:
Usually this is a little bit tricky; let me share what is on top of my mind:
If you just want to get a sense of what's happening inside a simple neural net, check out this link.
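Concretely, since your layers are already wired with the functional API, you can wrap them in a model and print a summary to see how the tensor shapes change at each stage. A quick check, reusing the inputs and p2 tensors from your snippet:

model = tf.keras.Model(inputs=inputs, outputs=p2)
model.summary()
# The summary lists the output shape of every layer: the image goes from
# (256, 256, 3) at the input to (128, 128, 64) after p1 and (64, 64, 128) after p2.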
If you want to visualize the feature maps, check this repo; you just need to sync the last sections of the notebook with your model. It has a cool animation, which you can see for LeNet on MNIST.
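To actually see the processed image after p1 and p2 for a concrete input, you can also build a second model that takes the same input tensor but returns those intermediate tensors, run an image through it, and plot a few channels of the result. A minimal sketch (img_path is a placeholder for one of your own image files, and the matplotlib plotting is just one way to display the maps):

import numpy as np
import matplotlib.pyplot as plt

# Reuses the `inputs`, `p1` and `p2` tensors defined above; the weights are
# shared with your main model (here still untrained/random).
feature_extractor = tf.keras.Model(inputs=inputs, outputs=[p1, p2])

# Load one image and turn it into a batch of size 1.
# `img_path` is a placeholder: point it at one of your own images.
img = tf.keras.utils.load_img(img_path, target_size=(IMG_WIDTH, IMG_HEIGHT))
img = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)  # (1, 256, 256, 3)

p1_out, p2_out = feature_extractor.predict(img)
print(p1_out.shape, p2_out.shape)  # (1, 128, 128, 64) (1, 64, 64, 128)

# Show the first 8 channels of the feature map after p1.
fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for i, ax in enumerate(axes):
    ax.imshow(p1_out[0, :, :, i], cmap='gray')
    ax.axis('off')
plt.show()

The same plotting loop works for p2_out; each channel is one filter's response, so the maps will differ per channel and will only become meaningful after the network has been trained.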
On the more technical side, getting a sense of how a CNN-like model makes a decision is covered by topics like XAI (explainable AI), and more specifically Grad-CAM.
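For reference, a rough sketch of the usual Keras Grad-CAM recipe is below. It assumes you have extended your network into a full classification model called model with a Dense/softmax head (which your snippet does not have yet), and that last_conv_name is the name of the Conv2D layer you want to explain; both are assumptions, not part of your code:

import tensorflow as tf

def grad_cam(model, img, last_conv_name, class_idx=None):
    # Model that maps the input to (last conv feature map, predictions).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img)
        if class_idx is None:
            class_idx = tf.argmax(preds[0])
        class_score = preds[:, class_idx]
    # Gradient of the class score w.r.t. the conv feature map.
    grads = tape.gradient(class_score, conv_out)
    # Channel importance: average the gradient over the spatial dimensions.
    weights = tf.reduce_mean(grads, axis=(1, 2))
    # Weighted sum of the channels, then ReLU to keep positive evidence only.
    cam = tf.reduce_sum(conv_out * weights[:, tf.newaxis, tf.newaxis, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    # Normalize to [0, 1] so the heatmap can be resized and overlaid on the input.
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()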
Hope these are helpful.