So I have the code below, which takes my processed data and feeds it into my model:
import numpy as np
import tensorflow as tf
from tensorflow.keras import optimizers, callbacks
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, BatchNormalization, Dense
from tensorflow.keras.callbacks import ModelCheckpoint

# Load the preprocessed features and labels.
with np.load("/content/data.npz") as data:
    train_examples = data["features"]
    train_labels = data["labels"]

train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
print(train_dataset)

# eventually we need to add validation for accuracy purposes
# for better accuracy and strength, increase this
model = Sequential()
model.add(Conv2D(filters=10, kernel_size=1, activation="relu", input_shape=(14, 8, 8)))
model.add(MaxPooling2D(pool_size=2, strides=None))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(1, activation="sigmoid"))

model.compile(optimizer=optimizers.Adam(5e-4), loss="mean_squared_error")
model.summary()

checkpoint_filepath = "/tmp/checkpoint/"
model_checkpointing_callback = ModelCheckpoint(
    filepath=checkpoint_filepath,
    save_best_only=True,
)

model.fit(
    train_examples,
    train_labels,
    epochs=1000,
    verbose=1,
    callbacks=[
        callbacks.ReduceLROnPlateau(monitor="loss", patience=10),
        callbacks.EarlyStopping(monitor="loss", patience=15, min_delta=1e-4),
        model_checkpointing_callback,
    ],
)
model.save("model.h5")
Now, I don't know if I'm just misunderstanding how tensors work, but if I do model.fit(train_dataset) I get the error

Input layer 0 of model expects input shape of (None, 14, 8, 8) but got (14, 8, 8)

However, when I pass the data in directly with model.fit(train_examples, train_labels), it works.
From reading the tf examples: if I have a bunch of 28x28-pixel images in an array, create a dataset from those images, and define my model's input shape as (28, 28), then it should work, right?
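For what it's worth, printing the dataset's element_spec shows that each element is a single example with no batch dimension. Here is a minimal sketch with dummy arrays of the same shape as my real data:

import numpy as np
import tensorflow as tf

# Dummy stand-ins with the same shapes as my real features/labels.
features = np.zeros((100, 14, 8, 8), dtype=np.float32)
labels = np.zeros((100,), dtype=np.float32)

dataset = tf.data.Dataset.from_tensor_slices((features, labels))
print(dataset.element_spec)
# (TensorSpec(shape=(14, 8, 8), dtype=tf.float32, name=None),
#  TensorSpec(shape=(), dtype=tf.float32, name=None))
# Each element is one example of shape (14, 8, 8) -- exactly the shape
# from the error message, with no leading batch dimension.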
CodePudding user response:
When you call model.fit with train_examples and train_labels, it uses the default batch_size of 32. To make the tf.data.Dataset work with model.fit, you will have to batch the dataset yourself. You can try

batch_train_dataset = train_dataset.batch(32)

The documentation for tf.data.Dataset.batch covers this.
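For example, a minimal sketch of the adjusted training call, reusing the model and callbacks defined in your question (the batch size of 32 here just mirrors the Keras default; tune it as needed):

# Batch the dataset so each element becomes a (features, labels) pair
# with shapes (None, 14, 8, 8) and (None,), which is what the model expects.
batch_train_dataset = train_dataset.batch(32)

# When fitting on a dataset, pass it as the only data argument;
# the labels are already part of each dataset element.
model.fit(
    batch_train_dataset,
    epochs=1000,
    verbose=1,
    callbacks=[
        callbacks.ReduceLROnPlateau(monitor="loss", patience=10),
        callbacks.EarlyStopping(monitor="loss", patience=15, min_delta=1e-4),
        model_checkpointing_callback,
    ],
)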