Let's suppose I have an input layer with shape (h, w, f) = (1, 1, 256).
Let me build two sequences:
case 1:
input = keras.Input((1, 1, 256))
x = keras.layers.Conv2D(filters=32, kernel_size=(1, 1), strides=1)(input)
x = keras.layers.ReLU()(x)
x = keras.layers.Conv2D(filters=256, kernel_size=(1, 1), strides=1)(x)
case 2:
input = keras.Input((1, 1, 256))
x = keras.layers.Flatten()(input)
x = keras.layers.Dense(32)(x)
x = keras.layers.ReLU()(x)
x = keras.layers.Dense(256)(x)
x = keras.layers.Reshape((1, 1, 256))(x)
In these two cases, is the output x the same? I am building an SE-Net-like attention module (not exactly the same).
CodePudding user response:
Yes, and you do not need to apply Flatten() and Reshape() in case 2: Dense is applied along the last (channel) axis automatically, so a 1x1 convolution and a Dense layer compute the same thing here.
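Here is a minimal sketch (assuming TensorFlow 2.x / tf.keras; names like conv1 and dense1 are just for illustration) that copies the 1x1 Conv2D weights into the corresponding Dense layers and checks that both paths give the same output on a random batch:

import numpy as np
from tensorflow import keras

inp = keras.Input((1, 1, 256))

# Case 1: 1x1 convolutions.
conv1 = keras.layers.Conv2D(32, kernel_size=1)
conv2 = keras.layers.Conv2D(256, kernel_size=1)
y_conv = conv2(keras.layers.ReLU()(conv1(inp)))

# Case 2: Dense layers applied directly; they act on the last (channel) axis.
dense1 = keras.layers.Dense(32)
dense2 = keras.layers.Dense(256)
y_dense = dense2(keras.layers.ReLU()(dense1(inp)))

model = keras.Model(inp, [y_conv, y_dense])

# A Conv2D kernel has shape (1, 1, in, out); squeeze it to the (in, out)
# shape that Dense expects, and copy the biases unchanged.
dense1.set_weights([conv1.get_weights()[0].reshape(256, 32), conv1.get_weights()[1]])
dense2.set_weights([conv2.get_weights()[0].reshape(32, 256), conv2.get_weights()[1]])

x = np.random.rand(4, 1, 1, 256).astype("float32")
out_conv, out_dense = model.predict(x)
print(np.allclose(out_conv, out_dense, atol=1e-6))  # True: the two paths match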