How to apply video augmentation with Keras preprocessing layers uniformly to all frames in the video


I'm trying to apply data augmentation to a video dataset where each video receives its own augmentation. For example, all frames in video 1 are flipped horizontally and rotated by 10°, while all frames in video 2 are not flipped and are rotated by -5°. I passed a seed to the preprocessing layers; however, each frame of video 1 is still augmented differently. This is what my approach looks like:

import tensorflow as tf
from tensorflow.keras.layers import (Resizing, Rescaling, RandomContrast,
                                     RandomTranslation, RandomFlip, RandomRotation)

def data_augment(frames, seed):
    x = tf.keras.layers.CenterCrop(height=1000, width=1200)(frames)
    x = Resizing(width=128, height=128)(x)
    x = Rescaling(1. / 255)(x)
    x = RandomContrast((0.2, 0.2), seed=seed)(x)
    x = RandomTranslation(height_factor=0.15, width_factor=0.2,
                          fill_mode="constant", fill_value=0.0, seed=seed)(x)
    x = RandomFlip("horizontal", seed=seed)(x)
    x = RandomRotation(factor=0.01, fill_mode="constant", seed=seed)(x)
    return x

CodePudding user response:

Video augmentation: for videos, fold the time axis into the channel axis and treat the result as an ordinary image augmentation problem, then reshape the output back so that all frames of a video receive the same augmentation.
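
Why the reshape helps: a seed only makes the random draws reproducible, and layers such as RandomFlip still draw an independent transform for every element on the batch axis, so frames fed in along that axis are augmented independently. Once the frames are folded into the channel axis, each video is a single batch element and all of its frames share one draw. A small sketch of the per-batch-element behaviour, using a made-up 2×2 frame repeated eight times:

import tensorflow as tf

# Eight identical "frames" stacked on the batch axis.
frames = tf.tile(tf.reshape(tf.range(4.0), (1, 2, 2, 1)), [8, 1, 1, 1])

flip = tf.keras.layers.RandomFlip("horizontal", seed=42)
out = flip(frames, training=True)

# Each batch element gets its own random draw, so typically some frames come
# back flipped and others unchanged, even though a seed was set.
print([bool(tf.reduce_all(out[i] == frames[i]).numpy()) for i in range(8)])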

# input dimensions: (batch, time, width, height, 3)
BATCH, TIME, WIDTH, HEIGHT, _ = tf.shape(videos)

Step 1: change the input shape from (batch, time, width, height, 3) to (batch, width, height, time*3)

# move the time axis to the end: (batch, width, height, 3, time)
videos = tf.transpose(videos, [0, 2, 3, 4, 1])

# fold channels and time together
out_shape = (BATCH, WIDTH, HEIGHT, TIME * 3)

videos = tf.reshape(videos, out_shape)
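
As a quick sanity check with made-up dimensions (two 16-frame RGB videos), the pack step turns the video batch into an ordinary 4-D image batch with 48 channels:

import tensorflow as tf

videos = tf.zeros((2, 16, 1080, 1920, 3))         # (batch, time, width, height, 3)
packed = tf.transpose(videos, [0, 2, 3, 4, 1])    # (2, 1080, 1920, 3, 16)
packed = tf.reshape(packed, (2, 1080, 1920, 48))  # channels and time folded together
print(packed.shape)                               # (2, 1080, 1920, 48)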

Step 2: apply the image augmentation

augmented_data = data_augment(videos, ...)

Step 3: reshape back to the original layout

BATCH, WIDTH, HEIGHT, channels = tf.shape(augmented_data)

# split the channel axis back into (3, time), then restore the frame axis
augmented_data = tf.reshape(augmented_data, (BATCH, WIDTH, HEIGHT, 3, channels // 3))
augmented_data = tf.transpose(augmented_data, [0, 4, 1, 2, 3])
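
Putting the three steps together, here is a minimal end-to-end sketch that reuses the data_augment function from the question; the wrapper name augment_video_batch is just illustrative, and videos are assumed to have shape (batch, time, width, height, 3):

import tensorflow as tf

def augment_video_batch(videos, seed):
    batch, time, width, height, channels = tf.unstack(tf.shape(videos))

    # Step 1: move time to the end and fold it into the channel axis.
    x = tf.transpose(videos, [0, 2, 3, 4, 1])                  # (batch, width, height, 3, time)
    x = tf.reshape(x, (batch, width, height, channels * time))

    # Step 2: the preprocessing layers now see one "image" per video,
    # so every frame of that video shares the same random transform.
    x = data_augment(x, seed)

    # Step 3: split the channel axis back into (3, time) and restore the frame axis.
    b, w, h, c = tf.unstack(tf.shape(x))
    x = tf.reshape(x, (b, w, h, channels, c // channels))
    return tf.transpose(x, [0, 4, 1, 2, 3])                    # (batch, time, width, height, 3)

If the dataset yields single unbatched videos of shape (time, width, height, 3), add a leading batch dimension with tf.expand_dims before calling the wrapper and remove it again with tf.squeeze, or batch the dataset first and map the wrapper over the batches.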