building an autoencoder network for parameter predictions


I am new to the machine learning domain. I have a file where the first column contains a 1D signal and the second column stores its corresponding frequency, mean_amplitude, and time. These are the input-output pairs for supervised training, i.e. for a new test 1D signal, I need the network to output its frequency, mean_amplitude, and time.

-0.000000000000000000e+00     5.80000
-0.000000000000000000e+00     3.11111
-0.000000000000000000e+00    -1.3666
-0.000000000000000000e+00
-1.366125990000000065e-14
-1.032400010000000034e-13
-6.034000879999999677e-13
-5.719921059999999811e-13
-1.361178959999999947e-12
-9.374413750000000466e-11
-1.666704970000000006e-10
-1.149504050000000062e-09
5.453276159999999863e-10
1.457022949999999906e-09
-5.355599959999999815e-09
-4.683606839999999697e-09
-2.849577019999999957e-09
-1.108899989999999921e-08
-2.849577019999999957e-09
-4.683606839999999697e-09
-5.355599959999999815e-09
1.457022949999999906e-09
5.453276159999999863e-10
-1.149504050000000062e-09
-1.666704970000000006e-10
-9.374413750000000466e-11
-1.361178959999999947e-12
-5.719921059999999811e-13
-6.034000879999999677e-13
-1.032400010000000034e-13
-0.000000000000000000e+00
-0.000000000000000000e+00

In a similar way, I have 1000 such input-output pairs saved in a directory (as attached), and I want to train an autoencoder network so that it can predict the frequency, mean_amplitude, and time for a new test signal.

In this regard, I need some suggestions on how to feed this kind of input-output pair to the autoencoder.
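To make the file layout concrete, here is a minimal sketch of how one such file could be parsed into a signal array and a 3-value target (assuming whitespace-separated columns; data_0001.txt is just a placeholder file name):

import numpy as np

def load_pair(path):
    # Column 1 is the 1D signal; the first rows of column 2 hold
    # frequency, mean_amplitude and time.
    signal, targets = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            signal.append(float(parts[0]))
            if len(parts) > 1:
                targets.append(float(parts[1]))
    return np.array(signal), np.array(targets)

sig, y = load_pair("data_0001.txt")   # placeholder file name
print(sig.shape, y)                   # y -> [frequency, mean_amplitude, time]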

I found the following code in the Keras tutorial, but I have no idea how to implement it for this kind of data. I hope machine learning experts can share some ideas.

from tensorflow.keras import layers
from tensorflow.keras.models import Model

input = layers.Input(shape=(28, 28, 1))

# Encoder
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)

# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)

# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()

autoencoder.fit(x=train_data, y=train_data, epochs=50, batch_size=128, shuffle=True, validation_data=(test_data, test_data))

CodePudding user response:

Here is a simple working model with dummy data as requested:

import tensorflow as tf

signal_input = tf.keras.layers.Input(shape=(1,))
x = tf.keras.layers.Dense(16, activation='relu')(signal_input)
x = tf.keras.layers.Dense(8, activation='relu')(x)
output = tf.keras.layers.Dense(3, activation='linear')(x)

model = tf.keras.models.Model(inputs=signal_input, outputs=output)
model.compile(optimizer='adam',
              loss='MSE')

signals = tf.random.normal((1000,1)) # 1000 signals with 1 value each
labels = tf.random.normal((1000, 3)) # 1000 labels with 3 values for frequency, mean_amplitude, and a time

model.fit(x = signals, y = labels, epochs=5, batch_size=8)

And the output:

Epoch 1/5
32/32 [==============================] - 0s 1ms/step - loss: 1.0087
Epoch 2/5
32/32 [==============================] - 0s 1ms/step - loss: 0.9856
Epoch 3/5
32/32 [==============================] - 0s 1ms/step - loss: 0.9777
Epoch 4/5
32/32 [==============================] - 0s 1ms/step - loss: 0.9747
Epoch 5/5
32/32 [==============================] - 0s 1ms/step - loss: 0.9733
<keras.callbacks.History at 0x7f4d0909f7d0>

This should give you an idea of how you could implement your model for your data.
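If you want the input to be the whole signal rather than a single value, a possible variant is sketched below (signal_length = 32 is only an assumption based on the sample file, and the random arrays stand in for data loaded from your directory):

import numpy as np
import tensorflow as tf

signal_length = 32   # assumption; use the real number of samples per signal
num_targets = 3      # frequency, mean_amplitude, time

# Encoder-style regressor: compress the signal, then predict the 3 parameters.
inputs = tf.keras.layers.Input(shape=(signal_length, 1))
x = tf.keras.layers.Conv1D(16, 3, activation='relu', padding='same')(inputs)
x = tf.keras.layers.MaxPooling1D(2)(x)
x = tf.keras.layers.Conv1D(32, 3, activation='relu', padding='same')(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
outputs = tf.keras.layers.Dense(num_targets, activation='linear')(x)

model = tf.keras.models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mse')

# signals: (1000, signal_length, 1) array built from your files
# labels:  (1000, 3) array of [frequency, mean_amplitude, time]
signals = np.random.normal(size=(1000, signal_length, 1)).astype('float32')
labels = np.random.normal(size=(1000, 3)).astype('float32')
model.fit(signals, labels, epochs=5, batch_size=8)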
