How Can I Use TensorFlow's compute_output_shape Function to Trace the Sizes of Feature Maps?


I'm trying to measure the size of my feature maps at each layer of a deep net. I'm doing this because I want to understand the differences between two nets that I thought had equivalent architectures, though expressed differently - one used the Sequential method, the other used the Functional API (see SourceQuestion, if interested).

The model I'm having difficulty characterizing was built with the Functional API. I can't just use the get_shape function, because I never specified an input shape, so I'm trying to use the compute_output_shape function instead.
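For reference, the other workaround I know of is to push a single dummy batch through the model so every layer gets built with concrete shapes. A rough sketch of that idea, assuming the vgg2 model defined below and 224x224x3 inputs:

import tensorflow as tf

dummy = tf.zeros([1, 224, 224, 3])   # hypothetical batch of one image
x = dummy
for layer in vgg2.layers:            # blocks, flatten and dense layers, in call order
    x = layer(x)                     # calling a layer on a concrete tensor builds it
    print(layer.name, x.shape)

But I would still like to understand compute_output_shape, so that is what I'm asking about here.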

The Functional API model was constructed using one class to build VGG-ish blocks and another class to assemble the net, as follows:

import os
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.utils import plot_model

class Mini_Block2(tf.keras.Model):
    def __init__(self, filters, kernel_size, pool_size=2, strides=1):
        super().__init__()
        self.filters = filters
        self.kernel_size = kernel_size
            
        # Define a Conv2D layer, specifying filters, kernel_size, activation and padding.
        self.conv2D_0 = tf.keras.layers.Conv2D(filters=filters, 
                                                        kernel_size=kernel_size, 
                                                        activation='relu',
                                                        strides=strides,
                                                        padding='valid')
        
        # Define the max pool layer that will be added after the Conv2D blocks
        self.max_pool = tf.keras.layers.MaxPooling2D(pool_size=pool_size, 
                                                     strides=strides,
                                                     padding='valid')
  
    def call(self, inputs):
        # access the class's conv2D_0 layer
        conv2D_0 = self.conv2D_0
        
        # Connect the conv2D_0 layer to inputs
        x = conv2D_0(inputs)

        # Finally, add the max_pool layer
        max_pool = self.max_pool(x)
        
        return max_pool
    
class MiniVGG2(tf.keras.Model):

    def __init__(self, num_classes):
        super().__init__()

        # Creating VGG-ish blocks with the following
        # (filters, kernel_size) configurations
        self.block_a = Mini_Block2(filters=32, kernel_size=3)
        self.block_b = Mini_Block2(filters=64, kernel_size=3)
        self.block_c = Mini_Block2(filters=128, kernel_size=3)
        self.block_d = Mini_Block2(filters=128, kernel_size=3)        

        # Classification head
        # Define a Flatten layer
        self.flatten = tf.keras.layers.Flatten()
        # Create a Dense layer with 512 units and ReLU as the activation function
        self.fc = tf.keras.layers.Dense(512, activation='relu')
        # Finally add a single-unit sigmoid classifier using a Dense layer
        self.classifier = tf.keras.layers.Dense(1, activation='sigmoid')
    def call(self, inputs):
        # Chain all the layers one after the other
        x = self.block_a(inputs)
        x = self.block_b(x)
        x = self.block_c(x)
        x = self.block_d(x)
        x = self.flatten(x)
        x = self.fc(x)
        x = self.classifier(x)
        return x

vgg2 = MiniVGG2(num_classes=1)

# Compile with losses and metrics
vgg2.compile(optimizer=RMSprop(learning_rate = 1e-4), 
              loss='binary_crossentropy', 
              metrics=['accuracy'])
hist = vgg2.fit(dataset, validation_data=d2, epochs=10)

Tragically, this net tends to run out of memory when I try to train it, unlike a net created using Sequential, which I believe to be identical to this one (see SourceQuestion, if interested).

So, I decided to look at how the sizes of each layer evolved.
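For reference, the by-hand arithmetic I'm comparing against: with padding='valid', a conv or pooling layer maps an input size n to floor((n - k) / s) + 1, where k is the kernel (or pool) size and s is the stride. A small helper I used to sanity-check individual layers (just that formula, nothing TensorFlow-specific; the function name is my own):

def valid_out_size(n, k, s=1):
    # output size for 'valid' padding: floor((n - k) / s) + 1
    return (n - k) // s + 1

print(valid_out_size(224, 3))   # 3x3 conv, stride 1 -> 222
print(valid_out_size(222, 2))   # 2x2 pool, stride 1 -> 221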

I scanned through my net with this function:

def feature_map_info2(model, input_shape):
    for layer in model.layers:
        mp_err_flag = False
        print(layer.name)

        try:
            # Go to the sub-layers of a Block layer
            print("      Sub-layers:", layer.layers[0].name, ",",layer.layers[1].name)
            output_shape = layer.layers[0].compute_output_shape(input_shape)
            try:
                output_shape = layer.layers[1].compute_output_shape(output_shape.as_list())
            except:
                mp_err_flag = True
        except:
            output_shape = layer.compute_output_shape(input_shape)
        print("     ", output_shape)
        if mp_err_flag:
            print("       Problem computing output shape for",layer.layers[1].name )
        input_shape = output_shape.as_list()
            

        

When I execute it with feature_map_info2(vgg2, [224,224,3]), I get the following output:

mini__block2
      Sub-layers: conv2d , max_pooling2d
      (222, 222, 32)
       Problem computing output shape for max_pooling2d
mini__block2_1
      Sub-layers: conv2d_1 , max_pooling2d_1
      (220, 220, 64)
       Problem computing output shape for max_pooling2d_1
mini__block2_2
      Sub-layers: conv2d_2 , max_pooling2d_2
      (218, 218, 128)
       Problem computing output shape for max_pooling2d_2
mini__block2_3
      Sub-layers: conv2d_3 , max_pooling2d_3
      (216, 216, 128)
       Problem computing output shape for max_pooling2d_3
flatten
      (216, 27648)
dense
      (216, 512)
dense_1
      (216, 1)

I am puzzled by these two problems:

Firstly, computing output shapes for the max-pooling layers only avoids an error if I prepend None to the shape list, even though the other layers don't require this. And although that avoids the error, it still gives an incorrect answer, even though I have verified that pool_size is (2, 2), as seen in the sample below from a Jupyter notebook:

zz = vgg2.layers[0].layers[1].compute_output_shape([None,224,224,3])
vgg2.layers[0].layers[1].name,vgg2.layers[0].layers[1].pool_size,zz

('max_pooling2d', (2, 2), TensorShape([None, 223, 223, 3]))

Because of the (2, 2) pool size, I expected a TensorShape of [None, 112, 112, 3], not [None, 223, 223, 3]. Also, why did I need to prepend None to the shape list, when "none" of the other layers required this?

Secondly, the Flatten layer returns a 2-D shape, (216, 216*128), instead of flattening everything to (216 * 216 * 128).

Does this make sense to anyone?

CodePudding user response:

First, you should call feature_map_info2 with the shape (None, 224, 224, 3):

feature_map_info2(vgg2, (None,224, 224, 3))
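The leading None stands for the batch dimension. MaxPooling2D and Flatten compute their output shape relative to that first axis, so when you pass a 3-element shape, the first spatial dimension gets treated as the batch size; that is also why your function could not compute a shape for the pooling layers without it. A quick illustration of the Flatten behaviour (assuming TF 2.x Keras, independent of your model):

import tensorflow as tf

flat = tf.keras.layers.Flatten()
print(flat.compute_output_shape((216, 216, 128)))        # (216, 27648) -> 216 treated as batch
print(flat.compute_output_shape((None, 216, 216, 128)))  # (None, 5971968) -> fully flattened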

You have set the stride for the max-pooling layer to 1. Set it to 2. (Note that the code below also switches the padding to 'same', which is why the spatial dimensions halve cleanly at each block.)
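That also accounts for the 223 you saw: with pool_size 2, stride 1 and 'valid' padding, the output size is floor((224 - 2) / 1) + 1 = 223, whereas stride 2 halves it. A quick check of the two settings (assuming TF 2.x Keras; the variable names here are just for illustration):

import tensorflow as tf

pool_stride1 = tf.keras.layers.MaxPooling2D(pool_size=2, strides=1, padding='valid')
pool_stride2 = tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='same')

print(pool_stride1.compute_output_shape((None, 224, 224, 3)))  # (None, 223, 223, 3)
print(pool_stride2.compute_output_shape((None, 224, 224, 3)))  # (None, 112, 112, 3)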

Change your code to:

class Mini_Block2(tf.keras.Model):
    def __init__(self, filters, kernel_size, pool_size=2, strides=1, max_pool_stride=2):
        super().__init__()
        self.filters = filters
        self.kernel_size = kernel_size
            
        # Define a Conv2D layer, specifying filters, kernel_size, activation and padding.
        self.conv2D_0 = tf.keras.layers.Conv2D(filters=filters, 
                                                        kernel_size=kernel_size, 
                                                        activation='relu',
                                                        strides=strides,
                                                        padding='same')
        
        # Define the max pool layer that will be added after the Conv2D blocks
        self.max_pool = tf.keras.layers.MaxPooling2D(pool_size=pool_size, 
                                                     strides=max_pool_stride,
                                                     padding='same')

Calling feature_map_info2(vgg2, (None,224, 224, 3)), you should get,

mini__block2_4
      Sub-layers: conv2d_15 , max_pooling2d_12
      (None, 112, 112, 32)
mini__block2_5
      Sub-layers: conv2d_16 , max_pooling2d_13
      (None, 56, 56, 64)
mini__block2_6
      Sub-layers: conv2d_17 , max_pooling2d_14
      (None, 28, 28, 128)
mini__block2_7
      Sub-layers: conv2d_18 , max_pooling2d_15
      (None, 14, 14, 128)
flatten_3
      (None, 25088)
dense_3
      (None, 512)
dense_4
      (None, 1)
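As a cross-check, the flattened size matches the last block's feature map: 14 * 14 * 128 = 25088. For comparison, with both strides left at 1 the final feature map would be roughly 212x212x128, i.e. about 5.7 million values going into Dense(512), which is a plausible explanation for the out-of-memory behaviour you saw.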