While trying to follow along with listing 6.40 from F. Chollet's "Deep Learning with Python", I encounter this warning:
WARNING:tensorflow:Layer gru_4 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
after executing the following code:
    input_tensor = layers.Input((None, float_data.shape[-1]))
    kmodel = layers.GRU(32, dropout=0.2, recurrent_dropout=0.2)(input_tensor)
    output_tensor = layers.Dense(1)(kmodel)
    model = models.Model(input_tensor, output_tensor)
My imports are:
    import os
    import numpy as np
    import matplotlib.pyplot as plt
    from typing import Tuple
    from tensorflow.keras import models, layers
    from tensorflow.keras.optimizers import RMSprop
Note that if I don't use recurrent_dropout in the GRU layer, everything works fine and fast. When I do use recurrent_dropout as in the code above, training still works, but it is very slow.
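For reference, here is a minimal sketch of the cuDNN-eligible variant: the same model as above, only with recurrent_dropout left at its default of 0 (float_data is defined in the book's earlier listings):

    from tensorflow.keras import layers, models

    input_tensor = layers.Input((None, float_data.shape[-1]))
    # dropout=0.2 masks only the layer inputs and is still compatible with
    # the fast cuDNN kernel; recurrent_dropout must stay at its default of 0.
    kmodel = layers.GRU(32, dropout=0.2)(input_tensor)
    output_tensor = layers.Dense(1)(kmodel)
    model = models.Model(input_tensor, output_tensor)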
System information:
- Python 3.7.7
- tensorflow-gpu 2.2.0
- CUDA: Cuda compilation tools, release 10.1, V10.1.243
- GPU: GeForce RTX 2080 Ti (11016 MiB)
- OS: Ubuntu 18.04.4 LTS
The link you have provided just reconfirms that the problem is with the recurrent_dropout argument: the requirement is to set it to 0, i.e., not to use it at all, presumably because the fused cuDNN kernel cannot apply a dropout mask to the recurrent connections at every timestep. I think support for recurrent_dropout should be implemented in the TF backend, since it is an important option that strongly affects performance (training time).
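To see the cost directly, here is a rough timing sketch (not from the book; the synthetic data shapes, with 14 features per timestep as in the Jena weather dataset, are my assumption) that trains one epoch with and without recurrent_dropout:

    import time
    import numpy as np
    from tensorflow.keras import layers, models

    def build_model(recurrent_dropout):
        inp = layers.Input((None, 14))  # 14 features per timestep (assumed)
        h = layers.GRU(32, dropout=0.2,
                       recurrent_dropout=recurrent_dropout)(inp)
        out = layers.Dense(1)(h)
        model = models.Model(inp, out)
        model.compile(optimizer="rmsprop", loss="mae")
        return model

    # Synthetic data: 512 samples, 240 timesteps, 14 features.
    x_train = np.random.rand(512, 240, 14).astype("float32")
    y_train = np.random.rand(512, 1).astype("float32")

    for rd in (0.0, 0.2):
        model = build_model(rd)
        start = time.time()
        model.fit(x_train, y_train, batch_size=128, epochs=1, verbose=0)
        print(f"recurrent_dropout={rd}: {time.time() - start:.1f}s")

Only the rd=0.2 configuration should trigger the cuDNN fallback warning.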