Transfer an LSTM model from cpu to GPU


I have a very simple LSTM model that I built in TensorFlow, and it works on the CPU. However, I want to run this model on the GPU. In PyTorch I would define the device and move the model to it, but I have no idea how to make this work in TensorFlow. Do you have any suggestions? Thanks.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

model = Sequential()
model.add(LSTM(64, activation='relu', input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
model.add(LSTM(32, activation='relu', return_sequences=False))
model.add(Dropout(0.1))
model.add(Dense(Y_train.shape[1], kernel_regularizer='l2'))
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=50)
opt = keras.optimizers.Adam(learning_rate=0.0008)
model.compile(optimizer=opt, loss='mse')
#model.summary()
history = model.fit(X_train, Y_train, epochs=2, batch_size=100, validation_data=(X_val, Y_val), callbacks=[callback],verbose=1, device).to(device)

CodePudding user response:

In TensorFlow, computations run on the GPU by default whenever one is available; this is stated in the official documentation.
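Not part of the original thread, but a quick way to confirm that TensorFlow actually sees your GPU (a minimal sketch, assuming TensorFlow 2.x):

import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means it will silently fall back to CPU.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible to TensorFlow:", gpus)

# Optional while debugging: log which device each op is placed on.
tf.debugging.set_log_device_placement(True)

If the list is empty, the problem is the CUDA/driver setup rather than the model code.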

Does an error show up when you run the model? It should work just as well on a GPU as on a CPU.
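If you want the Keras equivalent of PyTorch's .to(device), you can pin the model to a device explicitly with a tf.device scope; note that model.fit has no device argument, so the last line of your snippet would raise an error. A minimal sketch, assuming the X_train, Y_train, X_val, Y_val and callback objects from your code and at least one visible GPU:

# Build, compile and train under an explicit device scope ('/GPU:0' assumes a GPU is available).
with tf.device('/GPU:0'):
    model = Sequential()
    model.add(LSTM(64, activation='relu',
                   input_shape=(X_train.shape[1], X_train.shape[2]),
                   return_sequences=True))
    model.add(LSTM(32, activation='relu', return_sequences=False))
    model.add(Dropout(0.1))
    model.add(Dense(Y_train.shape[1], kernel_regularizer='l2'))
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0008), loss='mse')

    # No device argument here: fit() runs on whichever device the ops were placed on.
    history = model.fit(X_train, Y_train, epochs=2, batch_size=100,
                        validation_data=(X_val, Y_val),
                        callbacks=[callback], verbose=1)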
