How to make 2 tensors the same length by mean | median imputation of the shortest tensor?

Time:11-02

I'm trying to subclass the base Keras layer to create a layer that merges the rank-1 outputs of the 2 layers of a skip connection by outputting the dot product of the 2 tensors. The 2 incoming tensors are created by Dense layers parsed by a Neural Architecture Search algorithm that randomly selects the number of Dense units, and hence the lengths of the 2 tensors, which will usually not match. I am running an experiment to see whether casting them to the same length by appending a mathematically meaningful imputation to the shorter tensor [e.g. mean | median | hypotenuse | cos | ... etc], then merging them by means of the dot product, will outperform the Add or Concatenate merging strategies. To make them the same length:

The overall strategy I'm trying:

  1. Find the shorter tensor.
  2. Pass it to tf.reduce_mean() (aliasing the resulting mean as "rm" for the sake of discussion).
  3. Create a list repeating rm once for each unit of the length difference: [rm for _ in range(difference_in_length)]. Cast it to a tensor if necessary.
  4. [pad | concatenate] the shorter tensor with the result of the operation above to make it equal in length.
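In eager mode, on plain rank-1 tensors, the four steps above can be sketched like this (the example values are mine, not from the model):

```python
import tensorflow as tf

a = tf.constant([1., 2., 3., 4., 5.])  # the longer tensor
b = tf.constant([0., 9., 8.])          # the shorter tensor

# steps 1-2: take the mean of the shorter tensor ("rm")
rm = tf.reduce_mean(b)

# step 3: repeat rm once per unit of length difference
filler = tf.fill([tf.shape(a)[0] - tf.shape(b)[0]], rm)

# step 4: concatenate so both tensors have the same length
b_padded = tf.concat([b, filler], axis=0)

print(b_padded.shape)                      # now matches a
print(tf.tensordot(a, b_padded, axes=1))   # dot product is now well-defined
```

This works because everything here has a fully known static shape; the trouble described below only appears once the same ops run on symbolic Keras tensors with an unknown batch dimension.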

Here is where I am running into a dead wall:

Since the tf operation reduce_mean returns a symbolic KerasTensor whose shape is set to (None,) (it is not assumed to be a scalar), the padded tensors end up in a state of having a shape of '(None,)', which the tf.keras.layers.Dot layer refuses to ingest, throwing a ValueError because it cannot see that they are the same length, even though they always will be:

KerasTensor(type_spec=TensorSpec(shape=(None,), dtype=tf.float32, name=None), name='tf.math.reduce_mean/Mean:0', description="created by layer 'tf.math.reduce_mean'")

ValueError: A Concatenate layer should be called on a list of at least 1 input. Received: input_shape=[[(None,), (None,)], [(None, 3)]]
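The shape loss is reproducible outside Keras entirely: reducing over the feature axis without keepdims drops that axis, while keepdims=True leaves a known size-1 axis in place. A quick check with a stand-in batch (my own example, not the asker's code):

```python
import tensorflow as tf

x = tf.random.uniform([2, 5])                    # stand-in for a (batch, features) activation

m1 = tf.reduce_mean(x, axis=1)                   # shape (2,): the feature axis is gone
m2 = tf.reduce_mean(x, axis=1, keepdims=True)    # shape (2, 1): a size-1 column survives

# In a Keras graph the batch dim is None, so m1 becomes the troublesome (None,)
# while m2 stays (None, 1), which Concatenate and Dot can still reason about.
print(m1.shape, m2.shape)
```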

My code (in the package/module):

import tensorflow as tf
import numpy as np


class Linear1dDot(tf.keras.layers.Layer):
    def __init__(self, input_dim=None,):
        super(Linear1dDot, self).__init__()

    def __call__(self, inputs):
        max_len = tf.reduce_max(
            tf.constant([inp.shape[1] for inp in inputs]))
        print(f"max_len: {max_len}")
        for i in range(len(inputs)):
            inp = inputs[i]
            print(inp.shape)
            inp_length = inp.shape[1]
            if inp_length < max_len:
                print(f"{inp_length} < {max_len}")
                # pad_with = inp.reduce_mean()
                pad_with = tf.reduce_mean(inp, axis=1)
                print(pad_with)
                padding = [pad_with for _ in range(max_len - inp_length)]
                inputs[i] = tf.keras.layers.concatenate([padding, [inp]])
                # inputs[i] = tf.reshape(
                # tf.pad(inp, padding, mode="constant"), (None, max_len))

        print(inputs)

        return tf.keras.layers.Dot(axes=1)(inputs)

...

# Alternatively substituting the last few lines with:

                pad_with = tf.reduce_mean(inp, axis=1, keepdims=True)
                print(pad_with)
                padding = tf.keras.layers.concatenate(
                    [pad_with for _ in range(max_len - inp_length)])
                inputs[i] = tf.keras.layers.concatenate([padding, [inp]])
                # inputs[i] = tf.reshape(
                # tf.pad(inp, padding, mode="constant"), (None, max_len))

        print(inputs)

        return tf.keras.layers.Dot(axes=1)(inputs)

... and countless other permutations of attempts ...

Does anyone know a workaround or have any advice (other than "Don't try to do this")?
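For what it's worth, one possible workaround (a sketch of my own, under the assumption that the feature lengths are statically known, as they are for Dense outputs): compute the mean with keepdims=True so it stays rank 2, widen it with tf.repeat, and replace the Dot layer with an equivalent reduce_sum so nothing has to ingest a (None,) tensor. The class name MeanPadDot is hypothetical:

```python
import tensorflow as tf


class MeanPadDot(tf.keras.layers.Layer):
    """Sketch: mean-pad the shorter of two rank-1 feature tensors, then dot them."""

    def call(self, inputs):
        a, b = inputs
        len_a, len_b = a.shape[1], b.shape[1]  # static feature lengths from Dense
        if len_a < len_b:
            a = self._mean_pad(a, len_b - len_a)
        elif len_b < len_a:
            b = self._mean_pad(b, len_a - len_b)
        # batched dot product, shape (batch, 1); equivalent to Dot(axes=1)
        return tf.reduce_sum(a * b, axis=1, keepdims=True)

    @staticmethod
    def _mean_pad(t, n):
        # keepdims=True keeps shape (batch, 1) instead of the troublesome (batch,)
        rm = tf.reduce_mean(t, axis=1, keepdims=True)
        return tf.concat([t, tf.repeat(rm, repeats=n, axis=1)], axis=1)


# eager smoke test with hand-picked values
x = tf.constant([[1., 2., 3., 4., 5.]])
y = tf.constant([[0., 9., 8.]])
print(MeanPadDot()([x, y]))  # mean(y) = 17/3 pads y to length 5
```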

In the parent folder of this module's package ...

Test to simulate a skip connection merging into the current layer:

from linearoneddot.linear_one_d_dot import Linear1dDot
x = tf.constant([[0., 9., 8.]])          # 3 features, to match inp1
y = tf.constant([[1., 2., 3., 4., 5.]])  # 5 features, to match inp2

inp1 = tf.keras.layers.Input(shape=3)
inp2 = tf.keras.layers.Input(shape=5)
xd = tf.keras.layers.Dense(3, "relu")(inp1)
yd = tf.keras.layers.Dense(5, 'elu')(inp2)
combined = Linear1dDot()([xd, yd])  # tf.keras.layers.Dot(axes=1)([xd, yd])

z = tf.keras.layers.Dense(2)(combined)
model = tf.keras.Model(inputs=[inp1, inp2], outputs=z)

print(model([x, y]))

print(model([np.random.random((3, 3)), np.random.random((3, 5))]))

Does anyone know a workaround that can get the mean of the shorter rank-1 tensor as a scalar, which I can then append/pad onto the shorter tensor until it reaches the intended length (the same length as the longer tensor)?

CodePudding user response:

Try this, hope this will work. Try to pad the shorter input with 1s, then concat that with the input, then take the dot product, and then finally subtract the extra contribution that the padded 1s added to the dot product...

class Linear1dDot(tf.keras.layers.Layer):
    def __init__(self,**kwargs):
        super(Linear1dDot, self).__init__()
    
    def __call__(self, inputs):
        _input1 , _input2  = inputs
        _input1_shape = _input1.shape[1]
        _input2_shape = _input2.shape[1]
        
        difference = tf.math.abs(_input1_shape - _input2_shape)
        padded_input = tf.ones(shape=(1,difference))
        
        if _input1_shape > _input2_shape:
            padded_tensor = tf.concat([_input2, padded_input], axis=1)
            scaled_output = tf.keras.layers.Dot(axes=1)([padded_tensor, _input1])
            # the padded 1s multiply the tail of _input1, so remove that tail's sum
            scaled_output -= tf.reduce_sum(_input1[:, _input2_shape:], axis=1, keepdims=True)
            return scaled_output
        else:
            padded_tensor = tf.concat([_input1, padded_input], axis=1)
            scaled_output = tf.keras.layers.Dot(axes=1)([padded_tensor, _input2])
            scaled_output -= tf.reduce_sum(_input2[:, _input1_shape:], axis=1, keepdims=True)
            return scaled_output

x = tf.constant([[1., 2., 3., 4., 5., 9.]])
y = tf.constant([[0., 9., 8.]])

inp1 = tf.keras.layers.Input(shape=3)
inp2 = tf.keras.layers.Input(shape=5)
xd = tf.keras.layers.Dense(5, "relu")(x)
yd = tf.keras.layers.Dense(3, 'elu')(y)
combined = Linear1dDot()([xd, yd])  # tf.keras.layers.Dot(axes=1)([xd, yd])

Output:

<tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[4.4694786]], dtype=float32)>
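One caveat worth checking numerically: the padded 1s multiply the tail of the longer tensor, so the surplus in the dot product is the sum of that tail, not the count of the 1s. A quick eager check with my own numbers:

```python
import tensorflow as tf

long_t  = tf.constant([[1., 2., 3., 4., 5.]])
short_t = tf.constant([[0., 9., 8.]])

padded  = tf.concat([short_t, tf.ones([1, 2])], axis=1)   # [[0, 9, 8, 1, 1]]
raw_dot = tf.reduce_sum(padded * long_t, axis=1)          # 0 + 18 + 24 + 4 + 5 = 51

# the padded 1s contributed long_t's tail (4 + 5 = 9), so subtract that tail sum
true_dot = raw_dot - tf.reduce_sum(long_t[:, 3:], axis=1)  # 0 + 18 + 24 = 42
print(float(raw_dot), float(true_dot))
```

If the goal is mean imputation rather than cancellation, the same structure works with the shorter tensor's mean in place of the 1s and no subtraction at all.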