import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([5.0, 6.0, 7.0, 8.0, 9.0, 10.0], dtype=float)
This is the model I've come up with:
model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer=tf.keras.optimizers.SGD(), loss=tf.keras.losses.MeanSquaredError())
model.fit(xs, ys, epochs=500)
This problem is very similar to the one in this notebook: https://github.com/https-deeplearning-ai/tensorflow-1-public/blob/main/C1/W1/ungraded_lab/C1_W1_Lab_1_hello_world_nn.ipynb
CodePudding user response:
To evaluate your model on unseen data, it is a good idea to stay within the domain of the training inputs. You can generate new inputs with NumPy's linspace function (and maybe add a little bit of noise), then compute the corresponding targets by interpolating the original input/output pairs with interp1d from the SciPy package (since the relation between x and y is linear, linear interpolation reproduces it exactly):
from scipy.interpolate import interp1d
# generate 20 evenly spaced test inputs over the training domain [-1, 4]
xz = np.linspace(start=-1, stop=4, num=20)
# build an interpolating function from the original training pairs (xs, ys)
f = interp1d(xs, ys)
# expected targets for the new test inputs
yz = f(xz)
model.evaluate(xz, yz)
1/1 [==============================] - 0s 25ms/step - loss: 4.3371e-04
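If you want the noisy variant mentioned above, here is a minimal sketch. The noise scale (0.05) and the seed are arbitrary choices, not part of the original answer; the final assert also confirms that the interpolated targets match the underlying linear relation of the training data (ys = xs + 6), which is why interpolation is a safe way to produce them.
# minimal sketch: add small Gaussian label noise (scale chosen arbitrarily)
rng = np.random.default_rng(seed=0)
yz_noisy = yz + rng.normal(loc=0.0, scale=0.05, size=yz.shape)
model.evaluate(xz, yz_noisy)
# sanity check: the training pairs satisfy y = x + 6, so the
# interpolated targets match the direct formula exactly
assert np.allclose(yz, xz + 6.0)
With noisy targets, the reported loss will be bounded below by roughly the noise variance, so don't expect it to reach the near-zero value seen above.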