I am confused by this array-slicing notation.
def hypernetwork(self, inputs):
    x = self.fc(inputs)
    return x[..., :self.channels], x[..., self.channels:]
What is the return value? What does ... mean? self.channels is defined as the number of channels of the input. I think x is just the input feature block. Below is the relevant code for self.fc and self.channels:
def build(self, input_shape):
    self.channels = input_shape[0][-1]  # input_shape: [x, z].
    self.fc = KL.Dense(
        int(2 * self.channels),
        kernel_initializer=self.init,
        kernel_regularizer=tf.keras.regularizers.l2(
            l=self.wt_decay,
        ),
        bias_regularizer=tf.keras.regularizers.l2(
            l=self.wt_decay,
        ),
    )
CodePudding user response:
... means "all elements of all dimensions" up to the point where you start referencing dimensions explicitly, which you do here with :self.channels.
So in conclusion, if x is e.g. a 10x4x6 array and self.channels is 4, the output will be one 10x4x4 array and one 10x4x2 array. If x is 10x6 and self.channels is 2, you'll get a 10x2 and a 10x4 array. You're splitting the array along the last dimension.
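The split described above can be sketched with NumPy (used here just for illustration; the slicing semantics are the same as for TensorFlow tensors):

```python
import numpy as np

channels = 4
x = np.zeros((10, 4, 6))  # a 10x4x6 array, as in the example above

first = x[..., :channels]   # all leading dims, first `channels` entries of the last dim
second = x[..., channels:]  # all leading dims, the remaining entries of the last dim

print(first.shape)   # (10, 4, 4)
print(second.shape)  # (10, 4, 2)
```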
CodePudding user response:
When using ..., you are referring to all the dimensions except the last one (which you are slicing). For a three-dimensional tensor it is equivalent to the notation x[:, :, :channels]:
import tensorflow as tf
tf.random.set_seed(111)
channels = 2
x = tf.random.normal((1, 2, 3))
print(x)
print(x[..., :channels], x[:, :, :channels]) # Equivalent
print(x[..., channels:], x[:, :, channels:]) # Equivalent
tf.Tensor(
[[[ 0.7558127 1.5447265 1.6315602 ]
[-0.19868968 0.08828261 0.01711658]]], shape=(1, 2, 3), dtype=float32)
tf.Tensor(
[[[ 0.7558127 1.5447265 ]
[-0.19868968 0.08828261]]], shape=(1, 2, 2), dtype=float32) tf.Tensor(
[[[ 0.7558127 1.5447265 ]
[-0.19868968 0.08828261]]], shape=(1, 2, 2), dtype=float32)
tf.Tensor(
[[[1.6315602 ]
[0.01711658]]], shape=(1, 2, 1), dtype=float32) tf.Tensor(
[[[1.6315602 ]
[0.01711658]]], shape=(1, 2, 1), dtype=float32)
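One nuance worth noting: ... adapts to the rank of the tensor, whereas x[:, :, :channels] only works for exactly three dimensions. A quick NumPy sketch of the same slicing rule:

```python
import numpy as np

channels = 2
x2 = np.arange(12).reshape(3, 4)        # 2-D array
x4 = np.arange(24).reshape(1, 2, 3, 4)  # 4-D array

# `...` expands to as many `:` as each array's rank requires
assert np.array_equal(x2[..., :channels], x2[:, :channels])
assert np.array_equal(x4[..., :channels], x4[:, :, :, :channels])
print(x2[..., :channels].shape)  # (3, 2)
print(x4[..., :channels].shape)  # (1, 2, 3, 2)
```

This is why hypernetwork can slice with x[..., :self.channels] without knowing the batch dimensions of its input in advance.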