I have a dataset of video frames extracted from 1000 real videos and 1000 deepfake videos. After the preprocessing phase, each video is converted into 300 frames; in other words, I have 300,000 images with the Real (0) label and 300,000 images with the Fake (1) label. I want to train MesoNet on this data. I use a custom DataGenerator class to handle the train, validation, and test data with 0.8/0.1/0.1 ratios, but when I run the project it shows this message:
Filling up shuffle buffer (this may take a while):
What can I do to solve this problem?
You can see the DataGenerator class below.
import cv2
import numpy as np
from tensorflow import keras


class DataGenerator(keras.utils.Sequence):
    'Generates data for Keras'

    def __init__(self, df, labels, batch_size=32, img_size=(224, 224),
                 n_classes=2, shuffle=True):
        'Initialization'
        self.batch_size = batch_size
        self.labels = labels
        self.df = df
        self.img_size = img_size
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.batch_labels = []
        self.batch_names = []
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.df) / self.batch_size))

    def __getitem__(self, index):
        # Select the shuffled indices belonging to this batch.
        batch_index = self.indexes[index * self.batch_size : (index + 1) * self.batch_size]
        frame_paths = self.df.iloc[batch_index]["framePath"].values
        frame_label = self.df.iloc[batch_index]["label"].values
        # Load frames from disk and convert from OpenCV's BGR order to RGB.
        imgs = [cv2.imread(frame) for frame in frame_paths]
        imgs = [cv2.cvtColor(img, cv2.COLOR_BGR2RGB) for img in imgs]
        # Resize frames that do not already match the target size, keeping all frames.
        imgs = [
            img if img.shape[:2] == self.img_size else cv2.resize(img, self.img_size)
            for img in imgs
        ]
        batch_imgs = np.asarray(imgs)
        labels = list(map(int, frame_label))
        y = np.array(labels)
        self.batch_labels.extend(labels)
        self.batch_names.extend([str(frame).split("\\")[-1] for frame in frame_paths])
        return batch_imgs, y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.df))
        if self.shuffle:
            np.random.shuffle(self.indexes)
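For context, the generators are created roughly like this. This is only an illustrative sketch of the 0.8/0.1/0.1 split; the exact split code, the use of scikit-learn's train_test_split, and the random_state value are assumptions, not the code I actually run:

from sklearn.model_selection import train_test_split

# 80% train, then the remaining 20% split evenly into validation and test.
train_df, rest_df = train_test_split(df, test_size=0.2, stratify=df["label"], random_state=42)
val_df, test_df = train_test_split(rest_df, test_size=0.5, stratify=rest_df["label"], random_state=42)

train_gen = DataGenerator(train_df, train_df["label"].values)
val_gen = DataGenerator(val_df, val_df["label"].values, shuffle=False)
test_gen = DataGenerator(test_df, test_df["label"].values, shuffle=False)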
CodePudding user response:
Note that this is not an error, but a log message: https://github.com/tensorflow/tensorflow/blob/42b5da6659a75bfac77fa81e7242ddb5be1a576a/tensorflow/core/kernels/data/shuffle_dataset_op.cc#L138
If it is taking this long, the shuffle buffer you are filling is probably too large for your dataset: https://github.com/tensorflow/tensorflow/issues/30646
You can address this by lowering your buffer size: https://support.huawei.com/enterprise/en/doc/EDOC1100164821/2610406b/what-do-i-do-if-training-times-out-due-to-too-many-dataset-shuffle-operations
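For example, if you build a tf.data input pipeline anywhere in your project, keep the buffer_size argument of shuffle() well below the full dataset size. This is a minimal sketch under that assumption; frame_paths and labels are placeholder variables, not taken from your code:

import tensorflow as tf

# Hypothetical tensors of frame paths and labels (assumed for illustration).
ds = tf.data.Dataset.from_tensor_slices((frame_paths, labels))

# A buffer as large as the full 600,000 frames forces TensorFlow to load that many
# elements before the first batch is produced ("Filling up shuffle buffer ...").
# A smaller buffer starts training much sooner, at the cost of a weaker shuffle.
ds = ds.shuffle(buffer_size=1000)  # instead of buffer_size=len(frame_paths)
ds = ds.batch(32).prefetch(tf.data.AUTOTUNE)

Since your DataGenerator already shuffles indexes in on_epoch_end, a large extra shuffle buffer in the input pipeline adds little benefit.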