How to join two tf.data.Dataset tensor slices?


I have one tensor slice dataset with all the images and another with their masking images. How do I combine/join them into a single tf.data.Dataset?

# turning the path lists into tensor datasets
val_img_data = tf.data.Dataset.from_tensor_slices(np.array(all_val_img))
val_mask_data = tf.data.Dataset.from_tensor_slices(np.array(all_val_mask))

Then I mapped a function over the paths to load the images:

val_img_tensor = val_img_data.map(get_image)
val_mask_tensor = val_mask_data.map(get_image)

So now I have two datasets, one of images and one of masks. How do I join them into a single combined dataset?

I tried zipping them, but it didn't work:

val_data = tf.data.Dataset.from_tensor_slices(zip(val_img_tensor, val_mask_tensor))

Error

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/structure.py in normalize_element(element, element_signature)
    101         if spec is None:
--> 102           spec = type_spec_from_value(t, use_fallback=False)
    103       except TypeError:

11 frames
TypeError: Could not build a `TypeSpec` for <zip object at 0x7f08f3862050> with type zip

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
    100       dtype = dtypes.as_dtype(dtype).as_datatype_enum
    101   ctx.ensure_initialized()
--> 102   return ops.EagerTensor(value, ctx.device_name, dtype)
    103 
    104 

ValueError: Attempt to convert a value (<zip object at 0x7f08f3862050>) with an unsupported type (<class 'zip'>) to a Tensor.

CodePudding user response:

Maybe try tf.data.Dataset.zip. Python's built-in zip produces an object that TensorFlow cannot convert to tensors, whereas tf.data.Dataset.zip pairs the elements of the two datasets:

val_data = tf.data.Dataset.zip((val_img_tensor, val_mask_tensor))
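To confirm the two datasets are paired element-wise, you can inspect one element (a quick sketch; the exact shapes depend on what your get_image returns):

# each element of the zipped dataset is an (image, mask) tuple
for image, mask in val_data.take(1):
    print(image.shape, mask.shape)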

CodePudding user response:

Djinn's comment is essentially what you need to follow. Here is an end-to-end answer showing how to build a data pipeline for training a segmentation model, where each training pair consists of an image and its mask.

First, get the sample paths.

images = [
    '1.jpg',
    '2.jpg',
    '3.jpg', ...
]

masks = [
    '1.png',
    '2.png',
    '3.png', ...
]

Second, define the hyper-parameters (e.g. image size and batch size) and build the tf.data input pipeline.

IMAGE_SIZE = 128
BATCH_SIZE = 86

def read_image(image_path, mask=False):
    image = tf.io.read_file(image_path)

    if mask:
        # masks are single-channel PNGs; keep integer class labels
        image = tf.image.decode_png(image, channels=1)
        image.set_shape([None, None, 1])
        image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
        image = tf.cast(image, tf.int32)
    else:
        # images are JPEGs (decode_jpeg, not decode_png); scale to [0, 1]
        image = tf.image.decode_jpeg(image, channels=3)
        image.set_shape([None, None, 3])
        image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
        image = image / 255.

    return image

def load_data(image_list, mask_list):
    image = read_image(image_list)
    mask  = read_image(mask_list, mask=True)
    return image, mask

def data_generator(image_list, mask_list, split='train'):
    dataset = tf.data.Dataset.from_tensor_slices((image_list, mask_list))
    # shuffle only the training split
    dataset = dataset.shuffle(8 * BATCH_SIZE) if split == 'train' else dataset
    dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
    dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
    dataset = dataset.prefetch(tf.data.AUTOTUNE)
    return dataset

Lastly, pass the lists of image and mask paths to build the data generator.

train_dataset = data_generator(images, masks)
image, mask = next(iter(train_dataset.take(1))) 

print(image.shape, mask.shape)
(86, 128, 128, 3) (86, 128, 128, 1)
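The split argument controls shuffling, so the same function covers validation data; a minimal sketch, where val_images and val_masks are hypothetical lists of validation paths:

# shuffling is skipped for any split other than 'train'
val_dataset = data_generator(val_images, val_masks, split='val')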

Here you can see that tf.data.Dataset.from_tensor_slices successfully loads the training pairs and returns them as tuples, with no zipping needed. Hope it resolves your problem. I've also answered your other query regarding augmentation pipelines, HERE. I've shared plenty of semantic segmentation modeling approaches elsewhere; they may help.
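For completeness, the batched dataset can be fed straight into Keras training; a minimal sketch, assuming model is a hypothetical compiled segmentation model whose loss accepts integer masks (e.g. sparse categorical cross-entropy):

# model is a placeholder for your own compiled segmentation model
history = model.fit(train_dataset, epochs=10)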
