Negative huge loss in tensorflow


I am trying to predict price values from a dataset using Keras. I am following this tutorial: https://keras.io/examples/structured_data/structured_data_classification_from_scratch/, but when I get to fitting the model, I get a huge negative loss and a very small accuracy:

Epoch 1/50
1607/1607 [==============================] - ETA: 0s - loss: -117944.7500 - accuracy: 3.8897e-05
2022-05-22 11:14:28.922065: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7500 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05
Epoch 2/50
1607/1607 [==============================] - 15s 9ms/step - loss: -117944.7734 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05
Epoch 3/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117939.4844 - accuracy: 3.8897e-05 - val_loss: -123245.9922 - val_accuracy: 7.7791e-05
Epoch 4/50
1607/1607 [==============================] - 16s 10ms/step - loss: -117944.0859 - accuracy: 3.8897e-05 - val_loss: -123245.9844 - val_accuracy: 7.7791e-05
Epoch 5/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7422 - accuracy: 3.8897e-05 - val_loss: -123246.0547 - val_accuracy: 7.7791e-05
Epoch 6/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.8203 - accuracy: 3.8897e-05 - val_loss: -123245.9766 - val_accuracy: 7.7791e-05
Epoch 7/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.8047 - accuracy: 3.8897e-05 - val_loss: -123246.0234 - val_accuracy: 7.7791e-05
Epoch 8/50
1607/1607 [==============================] - 15s 10ms/step - loss: -117944.7578 - accuracy: 3.8897e-05 - val_loss: -123245.9766 - val_accuracy: 7.7791e-05
Epoch 9/50

This is my graph; the code looks like the one from the example, but adapted:

# Imports used by this snippet (as in the tutorial)
from tensorflow import keras
from tensorflow.keras import layers

# Categorical feature encoded as string
desc = keras.Input(shape=(1,), name="desc", dtype="string")

# Numerical features
date = keras.Input(shape=(1,), name="date")
quant = keras.Input(shape=(1,), name="quant")

all_inputs = [
    desc,
    quant,
    date,
]

# String categorical features
desc_encoded = encode_categorical_feature(desc, "desc", train_ds)

# Numerical features
quant_encoded = encode_numerical_feature(quant, "quant", train_ds)
date_encoded = encode_numerical_feature(date, "date", train_ds)

all_features = layers.concatenate(
    [
        desc_encoded,
        quant_encoded,
        date_encoded,
    ]
)
x = layers.Dense(32, activation="sigmoid")(all_features)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1, activation="relu")(x)
model = keras.Model(all_inputs, output)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])

And the dataset looks like this:

date    desc    quant   price
0   20140101.0  CARBONATO DE DIMETILO   999.00  1428.57
1   20140101.0  HIDROQUINONA    137.00  1314.82
2   20140101.0  1,5 PENTANODIOL TECN.   495.00  2811.60
3   20140101.0  SOSA CAUSTICA LIQUIDA 50%   567160.61   113109.14
4   20140101.0  BOROHIDRURO SODICO  6.24    299.27

I am also converting the date from YYYY-MM-DD to a number using:

dataset['date'] = pd.to_datetime(dataset["date"]).dt.strftime("%Y%m%d").astype('float64')

What am I doing wrong? :(

EDIT: I thought the encoder function from the tutorial was normalizing the data, but it wasn't. Do you guys know any other tutorial that could guide me better? The loss problem has been fixed! (it was due to normalization)
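For reference, a minimal sketch of the kind of target normalization that fixes this, assuming dataset is the DataFrame shown above and price is the regression target (standardisation is one common option, not the only one):

import pandas as pd  # dataset is assumed to be the DataFrame above

# Standardise the target and keep the statistics so predictions
# can be mapped back to real prices afterwards.
price_mean = dataset["price"].mean()
price_std = dataset["price"].std()
dataset["price"] = (dataset["price"] - price_mean) / price_std

# To recover a real price from a model prediction:
# real_price = prediction * price_std + price_mean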

CodePudding user response:

You seem to be quite confused by the components of your model.

  1. Binary cross-entropy is a classification loss; your problem is regression, so use MSE. "Accuracy" also makes no sense for regression; change the metric to MSE (or MAE) as well.
  2. Your data is huge, and thus your loss is huge. You have a price of 113109.14 in the data; what if your model is bad initially and predicts 0? You get a loss of roughly 100,000^2 = 10,000,000,000. Normalise your data; in your case, scale the output variable (the target, price) to between -1 and 1.
  3. There are some use cases where an output neuron should have an activation function, but unless you know why you are doing this, leaving it linear is a much safer choice.
  4. Dropout is a method for regularising your model. Do not start with it; always start with the simplest possible model, and make sure it can learn before trying to maximise the test score.
  5. Neural networks will not extrapolate; feeding in an ever-growing signal (the date) in raw format will almost surely cause problems. A corrected version of the model, reflecting these points, is sketched below.
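
Putting points 1-4 together, a minimal sketch of how the corrected model could look, reusing all_inputs and the encoded features from the question (a sketch under those assumptions, not the only valid setup):

from tensorflow import keras
from tensorflow.keras import layers

# Encoded features as built in the question.
all_features = layers.concatenate(
    [
        desc_encoded,
        quant_encoded,
        date_encoded,
    ]
)

# Keep the model simple: one hidden layer, no dropout for now.
x = layers.Dense(32, activation="sigmoid")(all_features)
# Linear output for regression: no activation on the final layer.
output = layers.Dense(1)(x)

model = keras.Model(all_inputs, output)
# MSE is a regression loss; MAE is a more interpretable metric here
# than accuracy.
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

For point 5, one illustrative option (an assumption, not prescribed above) is to replace the raw YYYYMMDD value with days since the earliest date before the encoding step:

import pandas as pd

# dataset["date"] holds the original YYYY-MM-DD strings here;
# days since the first date keeps the feature small instead of a
# raw, ever-growing YYYYMMDD number.
dates = pd.to_datetime(dataset["date"])
dataset["date"] = (dates - dates.min()).dt.days.astype("float64")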