from keras import backend as K

def dice_loss(y_true, y_pred, smooth=1):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    loss = (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return 1 - loss
The strange thing is that when I change the code to the following:
def dice_loss(y_true, y_pred, smooth=1):
    y_true_f = K.flatten(y_true[..., 1])
    y_pred_f = K.flatten(y_pred[..., 1])
    intersection = K.sum(y_true_f * y_pred_f)
    loss = (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return 1 - loss
the IoU metric rises normally again during training. Does anyone know what the difference is between these two formulas? Can the second one be used as a dice loss for binary classification?
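For reference, here is a minimal, self-contained sketch comparing the two versions side by side. It assumes the labels are one-hot with shape (batch, H, W, 2), channel 0 = background and channel 1 = foreground, which is my reading of the [..., 1] slice; the helper names dice_all_channels / dice_foreground_only are just for illustration.

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

def dice_all_channels(y_true, y_pred, smooth=1):
    # Version 1: flattens every channel, so background pixels also count toward the overlap.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return 1 - (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_foreground_only(y_true, y_pred, smooth=1):
    # Version 2: keeps only channel 1 (assumed foreground), i.e. the usual binary Dice.
    y_true_f = K.flatten(y_true[..., 1])
    y_pred_f = K.flatten(y_pred[..., 1])
    intersection = K.sum(y_true_f * y_pred_f)
    return 1 - (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

# Toy example: mostly background, one foreground pixel that the prediction misses entirely.
y_true = np.zeros((1, 4, 4, 2), dtype=np.float32)
y_true[..., 0] = 1.0           # background everywhere
y_true[0, 0, 0] = [0.0, 1.0]   # a single foreground pixel
y_pred = np.zeros_like(y_true)
y_pred[..., 0] = 1.0           # predicts background everywhere

print(float(dice_all_channels(tf.constant(y_true), tf.constant(y_pred))))    # small loss despite missing the object
print(float(dice_foreground_only(tf.constant(y_true), tf.constant(y_pred)))) # clearly penalizes the miss

On this toy input the first version reports a near-zero loss because the large background overlap dominates, while the second version only scores the foreground channel and therefore reflects the missed object.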