I want to predict values using TensorFlow. The model is trained with model.fit and then used for prediction. Physically, all predicted values should be larger than zero; however, when checking the output of model.predict, I found that some predicted values are smaller than zero. My question is how to add a conditional restriction in model.fit (or before model.fit) so that negative values are eliminated during the fitting process. Many thanks again.
# The code is as follows.
layer_number=36
model = keras.models.Sequential()
model.add(keras.layers.Dense(layer_number, activation='relu', input_shape=(shp[1]-1,)))
model.add(keras.layers.Dense(layer_number, activation='relu'))
model.add(keras.layers.Dense(layer_number, activation='relu'))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
history = model.fit(X, Y, epochs=4000, batch_size=16, verbose=0, validation_split=0.2)
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
print(hist.tail())
# Here it predicts the values.
predict_value = np.zeros(shp[0])
i = 0
while i < shp[0]:
    test_data = mc1[i, 0:shp[1]-1]
    a = model.predict(test_data.reshape(1, shp[1]-1), batch_size=1)
    predict_value[i] = a[0, 0]
    i = i + 1
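For illustration only (not part of the original post): the last Dense(1) layer above has a linear activation, so nothing constrains the sign of its output. One possible sketch, assuming the targets Y are themselves non-negative, is to put a non-negative activation such as softplus on the output layer:
# Sketch: same model as above, but the output layer cannot go below zero.
model = keras.models.Sequential()
model.add(keras.layers.Dense(layer_number, activation='relu', input_shape=(shp[1]-1,)))
model.add(keras.layers.Dense(layer_number, activation='relu'))
model.add(keras.layers.Dense(layer_number, activation='relu'))
model.add(keras.layers.Dense(1, activation='softplus'))  # softplus output is always > 0
model.compile(optimizer='adam', loss='mean_squared_error')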
CodePudding user response:
Conditional training is possible in several ways: feeding condition values as inputs, selecting or extending the data, adding remarks (labels), conditional rules, transfer learning, or additional tasks.
The automatic part of training works only through the learned coefficients, but you can also create conditional rules that act on the input data, such as HP, scores, number of targets, time in stage, or combo actions, as when we played Street Fighter.
[ Pre-data and training ]:
Var_1 = player_y_array - ( player_y_array / next_pipe_top_y_array ) + gap - coefficient * ( step * reward * 10 )
Var_2 = player_y_array - ( player_y_array / next_next_pipe_top_y_array ) + gap + coefficient * ( step * reward * 10 )
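Read as NumPy expressions, the two pre-data variables above could be computed like this (the concrete numbers below are placeholders for illustration; only the formulas themselves come from the answer):
import numpy as np

# Placeholder game-state arrays; in practice these come from the observation.
player_y_array = np.asarray([120.0, 118.0, 116.0], dtype=np.float32)
next_pipe_top_y_array = np.asarray([90.0, 90.0, 95.0], dtype=np.float32)
next_next_pipe_top_y_array = np.asarray([110.0, 110.0, 100.0], dtype=np.float32)
gap, coefficient, step, reward = 40.0, 0.1, 1.0, 1.0   # placeholder scalars

Var_1 = player_y_array - (player_y_array / next_pipe_top_y_array) + gap - coefficient * (step * reward * 10)
Var_2 = player_y_array - (player_y_array / next_next_pipe_top_y_array) + gap + coefficient * (step * reward * 10)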
[ Short-long term pre-data and training ]: Someone described it as being like the Jaegers in Pacific Rim, because it creates the data asynchronously (what scientists call asynchronous data training).
if episode_frame_number % 8 == 0 :
    ## _ = os.system('cls')
    # Short-term data: capture the current observation every 8 frames.
    IMAGE_1 = line_object_list(observation)
    DATA_1 = adding_array_DATA( DATA_1, IMAGE_1, reward, info, steps, n_numpointer, n_roles )

if episode_frame_number % 24 == 0 :
    # Long-term data: capture less frequently and predict an action from it.
    DATA_2 = adding_array_DATA( DATA_2, IMAGE_1, reward, info, steps, n_numpointer, n_roles )
    action_2 = predict_action(DATA_2)

if ( reward != 0 or lives != info['lives'] or steps % ( n_roles - 2 ) == 0 ) and not bPlay :
    # Retrain on the paired short-/long-term data whenever something notable happens.
    dataset = tf.data.Dataset.from_tensor_slices((tf.constant([DATA_1, DATA_2], dtype=tf.float32), tf.constant([action_1, action_2], shape=(2, 1), dtype=tf.int32)))
    batched_features = dataset
    history = model.fit(batched_features, epochs=50, validation_data=(batched_features), callbacks=[custom_callback])
    model.save_weights(checkpoint_path)

    i_count_list = len(history.history['loss']) - 1
    if ( history.history['loss'][i_count_list] <= 0.01 ) :
        # If the loss is already low, randomize action_1 and train again to explore.
        action_1 = random_action(action_1)
        dataset = tf.data.Dataset.from_tensor_slices((tf.constant([DATA_1, DATA_2], dtype=tf.float32), tf.constant([action_1, action_2], shape=(2, 1), dtype=tf.int32)))
        batched_features = dataset
        history = model.fit(batched_features, epochs=50, validation_data=(batched_features), callbacks=[custom_callback])

if lives != info['lives'] :
    steps = 0

episode_frame_number = info['episode_frame_number']
lives = info['lives']
frame_number = info['frame_number']
[ Sample ]: Usages
temp = tf.random.normal([10], 1, 0.2, tf.float32)
temp = np.asarray(temp) * np.asarray([ coefficient_0, coefficient_1, coefficient_2, coefficient_3, coefficient_4, coefficient_5, coefficient_6, coefficient_7, coefficient_8, coefficient_9 ])
temp = tf.nn.softmax(temp)
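One way the resulting distribution might be used (an assumption about the intent, following the action variables in the training loop above) is to take the most probable index as the next action, or to sample from it:
# 'temp' is the softmax distribution over the 10 candidate actions from above.
action = int(tf.math.argmax(temp).numpy())   # greedy choice
# or sample stochastically:
action = int(tf.random.categorical(tf.math.log(temp)[tf.newaxis, :], 1)[0, 0].numpy())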
[ Output ]:
CodePudding user response:
Many thanks for the suggestions. When evaluating the values, I found that some predicted values are smaller than zero, which means those values are nonphysical. I am wondering whether I could define a loss function as follows, so that the nonphysical values are penalized and effectively ignored during fitting. The pseudo-code is below; of course, it does not work as written. May I know the correct way to write it, please?
def custom_loss(y_actual, y_pred):
    custom_loss = kb.square(y_actual - y_pred)
    if (y_pred < 0):
        custom_loss = 100000
    return custom_loss
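For reference, a minimal sketch of how such a penalty could be expressed with element-wise tensor operations instead of a Python if (assuming kb is tensorflow.keras.backend and that the intent is to add a large penalty wherever a prediction is negative):
import tensorflow as tf
from tensorflow.keras import backend as kb

def custom_loss(y_actual, y_pred):
    squared_error = kb.square(y_actual - y_pred)
    # Large penalty on every element where the prediction is negative;
    # tf.where works element-wise, unlike a Python if on a tensor.
    penalty = tf.where(y_pred < 0, 100000.0 * tf.ones_like(y_pred), tf.zeros_like(y_pred))
    return kb.mean(squared_error + penalty, axis=-1)

model.compile(optimizer='adam', loss=custom_loss)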