Conv2D is incompatible with the layer in a GAN

I am developing a GAN using the MNIST dataset. I have built the Generator and the Discriminator, but when I combine them I get this error: Input 0 of layer "conv2d" is incompatible with the layer: expected axis -1 of input shape to have value 1, but received input with shape (None, 57, 57, 1024). Does anyone know why this happens? Do I have to add something else?

The preprocessing:

from tensorflow import keras
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, Conv2DTranspose, MaxPooling2D, BatchNormalization,
                                     LeakyReLU, Dropout, Dense)
from tensorflow.keras.optimizers import Adam

(x_train, _), (x_test, _) = mnist.load_data()

x_train = x_train.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)
x_train = x_train.astype('float32') / 255  # scale pixels to [0, 1]
x_test = x_test.astype('float32') / 255
img_rows, img_cols = 28, 28
channels = 1
img_shape = (img_rows, img_cols, channels)

The Generator:

def generator():
    model = Sequential()
    model.add(Conv2DTranspose(32, (3, 3), strides=(2, 2), activation='relu', use_bias=False,
                              input_shape=img_shape))
    model.add(BatchNormalization(momentum=0.3))
    model.add(Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='relu', padding='same',
                              use_bias=False))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Conv2DTranspose(64, (3, 3), strides=(2, 2), activation='relu', padding='same',
                              use_bias=False))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(BatchNormalization(momentum=0.3))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dense(512, activation=LeakyReLU(alpha=0.2)))
    model.add(BatchNormalization(momentum=0.7))
    model.add(Dense(1024, activation='tanh'))

    model.summary()
    model.compile(loss=keras.losses.binary_crossentropy, optimizer=Adam(learning_rate=0.02))
    return model

generator = generator()

The Discriminator:

def discriminator():
    model = Sequential()
    model.add(Conv2D(32, (5, 5), strides=(2, 2), activation='relu', use_bias=False,
                     input_shape=img_shape))
    model.add(BatchNormalization(momentum=0.3))
    model.add(Conv2D(64, (5, 5), strides=(2, 2), activation='relu', padding='same',
                     use_bias=False))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Conv2D(64, (5, 5), strides=(2, 2), activation='relu', padding='same',
                     use_bias=False))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(BatchNormalization(momentum=0.3))
    model.add(LeakyReLU(alpha=0.2))

    model.add(Dense(512, activation=LeakyReLU(alpha=0.2)))
    model.add(BatchNormalization(momentum=0.7))
    model.add(Dense(1024, activation='tanh'))

    model.summary()
    model.compile(loss=keras.losses.binary_crossentropy, optimizer=Adam(learning_rate=0.02))

    return model

discriminator = discriminator()

Both models combined (Where I get the error):

def GAN(generator, discriminator):
    model = Sequential()
    model.add(generator)
    discriminator.trainable = False
    model.add(discriminator)

    model.summary()
    model.compile()

    return model

gan = GAN(generator, discriminator)

CodePudding user response:

Your generator needs to produce images, so its output shape must match the image shape, (28, 28, 1). That is exactly what the error is telling you: the generator emits a tensor of shape (None, 57, 57, 1024), while the discriminator's first Conv2D expects the last axis to have value 1. The activation also has to be compatible with the pixel range. Your images are scaled to [0, 1], not [-1, 1], so you should not use "tanh"; choose an activation whose range matches the data (for example "sigmoid").

Last generator layer:

Dense(img_shape[-1], ...)
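
For illustration only, here is a minimal sketch of a generator that satisfies both constraints. It is not your original architecture: it assumes the usual GAN setup where the generator input is a random noise vector (latent_dim = 100 is an assumed value) rather than an image, upsamples a 7x7 feature map to 28x28, and ends with a one-unit Dense and a sigmoid so the output matches the (28, 28, 1) images scaled to [0, 1]:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Reshape, Conv2DTranspose

latent_dim = 100  # assumed size of the random noise vector

def build_generator():
    model = Sequential()
    # Project the noise vector onto a small 7x7 feature map.
    model.add(Dense(7 * 7 * 128, activation='relu', input_shape=(latent_dim,)))
    model.add(Reshape((7, 7, 128)))
    # Upsample 7x7 -> 14x14 -> 28x28 with strided transposed convolutions.
    model.add(Conv2DTranspose(64, (3, 3), strides=(2, 2), padding='same', activation='relu'))
    model.add(Conv2DTranspose(32, (3, 3), strides=(2, 2), padding='same', activation='relu'))
    # Last layer: img_shape[-1] = 1 unit; sigmoid keeps pixels in [0, 1]
    # to match the rescaled MNIST images.
    model.add(Dense(1, activation='sigmoid'))
    return model

print(build_generator().output_shape)  # (None, 28, 28, 1)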

Your discriminator needs to say whether an image is real or fake, so its output must be a single value between 0 and 1.

Last discriminator layer:

Dense(1, activation="sigmoid")
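
Again just as a sketch (not your exact layer stack), a stripped-down discriminator ending in that layer could look like the following; note the Flatten before the final Dense, which is needed so the model outputs one probability per image instead of a 4-D tensor:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, LeakyReLU, Flatten, Dense

def build_discriminator():
    model = Sequential()
    # Two strided convolutions downsample the 28x28x1 input.
    model.add(Conv2D(32, (5, 5), strides=(2, 2), padding='same', input_shape=(28, 28, 1)))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Conv2D(64, (5, 5), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    # Collapse the feature maps into a single real/fake probability.
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model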