Invalid reduction dimension 1 for input with 1 dimensions


I am developing a VAE on this dataset. I started from the Keras VAE tutorial code and adapted it to my data. However, when I call fit() I get: Invalid reduction dimension 1 for input with 1 dimensions. for '{{node Sum}} = Sum[T=DT_FLOAT, Tidx=DT_INT32, keep_dims=false](Mean, Sum/reduction_indices)' with input shapes: [?], [2] and with computed input tensors: input[1] = <1 2>. What do I have to change?
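The error can be reproduced in isolation: the tensor being reduced is rank-1 (the [?] shape in the message) while the reduction indices are (1, 2) (the <1 2> input). The shapes below are illustrative, not taken from my data:

import tensorflow as tf

# A rank-1 tensor of per-sample values, like the Mean node in the traceback.
per_sample = tf.zeros((128,))
try:
    tf.reduce_sum(per_sample, axis=(1, 2))
except (tf.errors.InvalidArgumentError, ValueError) as e:
    print(e)  # Invalid reduction dimension 1 for input with 1 dimensions ...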

The code:

import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split

df = pd.read_csv('local path')
data, test_data = train_test_split(df, test_size=0.2)
data.shape  # (227845, 31)

class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
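
A quick shape check of the layer (illustrative batch of 4; not part of the model):

z_mean_demo = tf.zeros((4, 31))
z_log_var_demo = tf.zeros((4, 31))
print(Sampling()([z_mean_demo, z_log_var_demo]).shape)  # (4, 31)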

encoder:

latent_dim = 31

encoder_inputs = keras.Input(shape=(31,))
x = layers.Dense(100, activation="relu")(encoder_inputs)
x = layers.Dense(100, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")
encoder.summary()

decoder:

latent_inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(100, activation="relu")(latent_inputs)
x = layers.Dense(100, activation="relu")(x)
decoder_outputs = layers.Dense(31, activation="sigmoid")(x)
decoder = keras.Model(latent_inputs, decoder_outputs, name="decoder")
decoder.summary()
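
VAE model (taken from the Keras VAE tutorial; sketched here for completeness, with the tutorial's metric trackers omitted):

class VAE(keras.Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            reconstruction = self.decoder(z)
            # This is the line the error points at (see the answer below):
            reconstruction_loss = tf.reduce_mean(
                tf.reduce_sum(
                    keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
                )
            )
            kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
            kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
            total_loss = reconstruction_loss + kl_loss
        grads = tape.gradient(total_loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": total_loss}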

This is where I get the error:

vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())
vae.fit(data, epochs=30, batch_size=128)

CodePudding user response:

The error comes from the tf.reduce_sum call inside tf.reduce_mean: for tabular data of shape (batch, 31), keras.losses.binary_crossentropy already reduces over the feature axis and returns a rank-1 tensor of shape (batch,), so there is no axis (1, 2) left to sum over (that axis argument is meant for the 2-D images in the tutorial). In the train_step method of the VAE model, change this:

reconstruction_loss = tf.reduce_mean(
    tf.reduce_sum(
        keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
    )
)

To this:

reconstruction_loss = tf.reduce_mean(
    tf.reduce_sum(
        keras.losses.binary_crossentropy(data, reconstruction), axis=-1
    ),
    keepdims=True,
)

Or:

reconstruction_loss = tf.reduce_mean(
    tf.reduce_sum(
        keras.losses.binary_crossentropy(data, reconstruction), axis=-1, keepdims=True
    )
)

And it should work.
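
A quick shape check (with illustrative random data shaped like a training batch) shows why the reduction now lines up:

import tensorflow as tf
from tensorflow import keras

data_batch = tf.random.uniform((128, 31))
reconstruction = tf.random.uniform((128, 31))

# binary_crossentropy already averages over the last (feature) axis,
# so its output is rank-1: one loss value per sample.
bce = keras.losses.binary_crossentropy(data_batch, reconstruction)
print(bce.shape)  # (128,)

# Reducing over the one remaining axis is valid, and the result is the
# scalar loss the training loop expects.
loss = tf.reduce_mean(tf.reduce_sum(bce, axis=-1, keepdims=True))
print(loss.shape)  # ()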
