I have a question about deep learning with Keras. I wrote a custom data generator because I was running out of memory and need to load the samples batch by batch for training, since I am working with large NIfTI images. I tried several solutions from this forum, but because these are 3D images they cannot be used with my model as-is. The problem appears in the training call (fit), which throws this error:
ValueError: Layer "3dcnn" expects 1 input(s), but it received 16 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:2' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:3' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:4' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:5' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:6' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:7' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:8' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:9' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:10' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:11' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:12' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:13' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:14' shape=(None, 208, 150, 10) dtype=float32>, <tf.Tensor 'IteratorGetNext:15' shape=(None, 208, 150, 10) dtype=float32>]
The code is as follows:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def get_model(width=208, height=150, depth=50):
    """Build a 3D convolutional neural network model."""
    inputs = keras.Input((width, height, depth, 1))

    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)

    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)

    x = layers.Conv3D(filters=128, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)

    x = layers.Conv3D(filters=256, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)

    x = layers.GlobalAveragePooling3D()(x)
    x = layers.Dense(units=512, activation="relu")(x)
    x = layers.Dropout(0.3)(x)

    outputs = layers.Dense(units=3, activation="softmax")(x)

    # Define the model.
    model = keras.Model(inputs, outputs, name="3dcnn")
    return model
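The post never shows how the model is instantiated and compiled before calling fit; a minimal sketch of that step, assuming an Adam optimizer and integer class labels (these are assumptions, not part of the original code):

# Hypothetical build/compile step -- not shown in the original post.
# The width/height/depth arguments must match the actual volume shape
# (the error below suggests volumes of 208 x 150 x 10).
model = get_model(width=208, height=150, depth=10)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",  # assumes integer labels in {0, 1, 2}
    metrics=["accuracy"],
)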
# Get ALL the training images to batch/split/iterate from batch size to batch size
train_data_generator = CustomDataGenerator(
    batch_size=16,
    # dataset_directory = "E:\\NIFTI_train_codegenerator"
    dataset_directory="NIFTI_train_codegenerator",
)

# Get a batch of images
train_images, labels = next(iter(train_data_generator))

# validation_split=0.2,
epochs = 100

model.fit(
    train_images,
    labels,
    batch_size=16,
    epochs=epochs,
    shuffle=True,
    verbose=2,
    callbacks=[checkpoint_cb, early_stopping_cb],
)
Thank you in advance
CodePudding user response:
Thank you for answering. If I pass train_data_generator to fit directly, I get the same error, only now with completely undefined shapes. This is the relevant part of the generator:
# For each file in the batch
for id in batch_IDs:
    path = os.path.join(self.directory, id, "la_4ch.nii.gz")
    # Read the NIfTI file
    image = process_scan(path)
    image = np.expand_dims(image, axis=0)
    # Append the image (and label)
    images.append(np.array(image))
model.fit(
    train_data_generator,
    batch_size=16,
    epochs=epochs,
    shuffle=True,
    verbose=2,
    callbacks=[checkpoint_cb, early_stopping_cb],
)
ValueError: Layer "3dcnn" expects 1 input(s), but it received 16 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:2' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:3' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:4' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:5' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:6' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:7' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:8' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:9' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:10' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:11' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:12' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:13' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:14' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:15' shape=(None, None, None, None) dtype=float32>
CodePudding user response:
The fix is to return each batch as a single array rather than a Python list of per-image arrays: Keras interprets a list of 16 arrays as 16 separate model inputs, which is exactly what the error reports. Stacking the images along a new batch axis solves it:

# images must be returned as a single tensor, otherwise Keras treats the list as many inputs
# np.stack merges the per-image arrays into one batch array
return np.stack(images, axis=0), labels
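For completeness, a minimal sketch of what the generator's __getitem__ could look like with that fix applied; process_scan, the label lookup, and the directory layout are assumptions reconstructed from the snippets above, not the original code:

import os
import numpy as np
from tensorflow import keras

class CustomDataGenerator(keras.utils.Sequence):
    def __init__(self, batch_size, dataset_directory, labels_by_id=None):
        self.batch_size = batch_size
        self.directory = dataset_directory
        # Hypothetical: one subdirectory per sample, each containing la_4ch.nii.gz
        self.ids = sorted(os.listdir(dataset_directory))
        # Hypothetical mapping from sample id to integer class label (0..2)
        self.labels_by_id = labels_by_id or {}

    def __len__(self):
        return int(np.ceil(len(self.ids) / self.batch_size))

    def __getitem__(self, index):
        batch_IDs = self.ids[index * self.batch_size:(index + 1) * self.batch_size]
        images, labels = [], []
        for id in batch_IDs:
            path = os.path.join(self.directory, id, "la_4ch.nii.gz")
            # process_scan (from the question) is assumed to return a single
            # (width, height, depth) volume, already resized/normalized.
            image = process_scan(path)
            images.append(np.array(image))  # no per-image expand_dims here
            labels.append(self.labels_by_id.get(id, 0))
        # Stack into single arrays: (batch, width, height, depth) and (batch,)
        return np.stack(images, axis=0), np.array(labels)

Note that if each image keeps the np.expand_dims(image, axis=0) from the question, np.concatenate(images, axis=0) would be the equivalent way to merge them into one batch tensor.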