I am attempting to train an LSTM generative model using the cuDNN kernel to speed up training, but it seems my model falls outside the kernel's criteria, and I'm having trouble understanding exactly what the issue is.
Here is the warning:
WARNING:tensorflow:Layer lstm will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
And here is my generative model:
import tensorflow as tf

def build_generative_model(vocab_size, embed_dim, lstm_units, lstm_layers, batch_size, dropout=0):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Embedding(vocab_size, embed_dim,
                                        batch_input_shape=[batch_size, None]))
    for i in range(max(1, lstm_layers)):
        model.add(tf.keras.layers.LSTM(lstm_units, return_sequences=True, stateful=True,
                                       dropout=dropout, recurrent_dropout=dropout))
    model.add(tf.keras.layers.Dense(vocab_size))
    return model
CodePudding user response:
I fixed it. I feel so stupid hahaha. For anyone having this issue, look over the requirements listed in the docs: https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM Go slowly through each one and set your layer arguments to the appropriate values. In my case the fix was
recurrent_dropout = 0
instead of
recurrent_dropout = 0.0
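For reference, here is a minimal sketch of an LSTM layer configured so that it satisfies the documented cuDNN criteria (this assumes TF 2.x; all the arguments shown are the Keras defaults, spelled out explicitly so it's clear which ones the fast kernel depends on):

```python
import tensorflow as tf

# The cuDNN kernel is only used when activation='tanh',
# recurrent_activation='sigmoid', recurrent_dropout=0,
# unroll=False, and use_bias=True. Any other combination
# silently falls back to the generic GPU kernel.
layer = tf.keras.layers.LSTM(
    units=256,
    activation="tanh",
    recurrent_activation="sigmoid",
    recurrent_dropout=0,   # a non-zero value here disables cuDNN
    unroll=False,
    use_bias=True,
    return_sequences=True,
)

# Plain (non-recurrent) dropout is still allowed with cuDNN,
# so dropout=0.2 would be fine, while recurrent_dropout=0.2 would not.
out = layer(tf.zeros([2, 5, 8]))  # (batch, time, features)
```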