Using the U-Net for regression problems

I have the following U-Net code for binary segmentation:

from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     Conv2DTranspose, concatenate)
from tensorflow.keras.models import Model
from tensorflow.keras.utils import to_categorical
import segmentation_models as sm

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
# One-hot encode the binary masks to match the two-channel softmax output
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# U-Net
def UNet(input_shape):
    inputs = Input(input_shape)
    # Encoding
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)
    conv1 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool3)
    conv4 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)

    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(pool4)
    conv5 = Conv2D(1024, (3, 3), activation='relu', padding='same')(conv5)

    #Decoding
    up6 = concatenate([Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(conv5), conv4], axis=3)
    conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(up6)
    conv6 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv6)

    up7 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(up7)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv7)

    up8 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3)
    conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(up8)
    conv8 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv8)

    up9 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3)
    conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(up9)
    conv9 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv9)

    # Softmax over two channels yields per-pixel class probabilities for the binary masks.
    conv10 = Conv2D(2, (1, 1), activation='softmax')(conv9)

    model = Model(inputs=[inputs], outputs=[conv10])
    return model

# Compile
input_shape = (512, 512, 3)
model_unet = UNet(input_shape)

model_unet.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                   loss=dice_plus_focal_loss,  # custom loss defined elsewhere; CategoricalCrossentropy is an alternative
                   metrics=['acc', sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5)],
                   # run_eagerly=True
                   )

model_unet.summary()

# Training
model_unet.fit(x_train, y_train,
               epochs=10,
               validation_data=(x_test, y_test),
               batch_size=1,
               # validation_batch_size=1,
               callbacks=my_callbacks)  # my_callbacks defined elsewhere; validation_split=0.2 is an alternative

I want to use this code for the case where the inputs are 2D grayscale images and the outputs are images of the same dimensions, with pixel values ranging from 0 to 1.

Is it possible to adapt this code for such a regression problem? How can I do it?

CodePudding user response:

For a regression task, you just have to replace the loss with mean squared error and use an appropriate activation function in the last layer; linear is a good default. Since the target is now a single continuous-valued image rather than two class maps, the final layer should also output one channel instead of two, and the to_categorical() one-hot encoding of the targets should be dropped:

conv10 = Conv2D(1, (1, 1), activation='linear')(conv9)

model_unet.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mean_squared_error")
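
Putting it together, here is a minimal sketch of the adapted setup. It assumes 512×512 single-channel inputs and targets, and that the conv10 line inside UNet has been replaced with the one-channel version above; the mean-absolute-error metric is an optional addition (the thresholded IOU/F-score metrics no longer apply to continuous outputs):

# Grayscale inputs: one channel instead of three
input_shape = (512, 512, 1)
model_unet = UNet(input_shape)

# Regression targets are continuous images, so skip to_categorical();
# y_train / y_test keep shape (N, 512, 512, 1) with values in [0, 1].

model_unet.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                   loss="mean_squared_error",
                   metrics=["mae"])

model_unet.fit(x_train, y_train,
               epochs=10,
               batch_size=1,
               validation_data=(x_test, y_test))

Since the question states that the outputs should lie between 0 and 1, a sigmoid head is a reasonable alternative to linear, as it constrains the predictions to that range by construction:

conv10 = Conv2D(1, (1, 1), activation='sigmoid')(conv9)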