I have a tensor of shape (60, 128, 30000). I want to get the value at the argmax of the 30000 dimension (axis=2). This code is an example:
tensor = tf.random.uniform((60, 128, 30000))  # shape (60, 128, 30000)
argmax = tf.argmax(tensor, axis=2)  # shape (60, 128) --> index of the max along the 30000 axis
# do something to get the value at each argmax index
# argmax output (indices):
<tf.Tensor: shape=(60, 128), dtype=int64, numpy=
array([[ 3229, 3079, 8360, ..., 1005, 16460, 872],
[17808, 1253, 25476, ..., 16130, 3479, 3479],
[27717, 25429, 18808, ..., 9787, 2603, 24011],
...,
[25429, 25429, 5647, ..., 18451, 12453, 12453],
[ 7361, 13463, 15864, ..., 18839, 12453, 12453],
[ 4750, 25009, 11888, ..., 5647, 1993, 18451]], dtype=int64)>
# Desired output: the value at each of these indices
With argmax, I get an array of indices, not values. How can I get an array of the same shape (60, 128) containing the values?
CodePudding user response:
You will have to use tf.meshgrid and tf.gather_nd to achieve what you want:
tensor = tf.random.uniform((60, 128, 30000)) # shape (60, 128, 30000)
argmax = tf.argmax(tensor, axis=2)
ij = tf.stack(tf.meshgrid(
    tf.range(tensor.shape[0], dtype=tf.int64),
    tf.range(tensor.shape[1], dtype=tf.int64),
    indexing='ij'), axis=-1)
gather_indices = tf.concat([ij, tf.expand_dims(argmax, axis=-1)], axis=-1)
result = tf.gather_nd(tensor, gather_indices)
tf.print(result.shape)
TensorShape([60, 128])
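Since the value at the argmax position is by definition the maximum along that axis, you can sanity-check the gather against tf.reduce_max. A quick self-contained sketch on a smaller tensor (the logic is identical, only the shape differs):

```python
import tensorflow as tf

# A smaller tensor keeps the sanity check fast.
tensor = tf.random.uniform((4, 6, 10))
argmax = tf.argmax(tensor, axis=2)
ij = tf.stack(tf.meshgrid(
    tf.range(tensor.shape[0], dtype=tf.int64),
    tf.range(tensor.shape[1], dtype=tf.int64),
    indexing='ij'), axis=-1)
gather_indices = tf.concat([ij, tf.expand_dims(argmax, axis=-1)], axis=-1)
result = tf.gather_nd(tensor, gather_indices)

# The values gathered at the argmax positions are, by definition,
# the per-slice maxima, so this must hold:
assert bool(tf.reduce_all(result == tf.reduce_max(tensor, axis=2)))
```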
Why is tf.meshgrid necessary? Because argmax does contain your indices, but in the wrong shape. tf.gather_nd needs to know exactly where it should extract values from the 3D tensor, and tf.meshgrid creates a rectangular grid from two one-dimensional ranges representing the indices of the first and second dimensions.
import tensorflow as tf
tensor = tf.random.uniform((2, 5, 3))
argmax = tf.argmax(tensor, axis=2)
# result = tf.gather_nd(tensor, argmax) <-- would not work: argmax has shape TensorShape([2, 5]), but gather_nd needs full index triples of shape TensorShape([2, 5, 3])
tf.print('Input tensor:\n', tensor, tensor.shape, '\nArgmax tensor:\n', argmax, argmax.shape)
i, j = tf.meshgrid(
    tf.range(tensor.shape[0], dtype=tf.int64),
    tf.range(tensor.shape[1], dtype=tf.int64),
    indexing='ij')
# You need to create a mesh grid to correctly index your tensor.
ij = tf.stack([i, j], axis=-1)
tf.print('Meshgrid:\n', i, j, summarize=-1)
tf.print('Stacked:\n', ij, summarize=-1)
gather_indices = tf.concat([ij, tf.expand_dims(argmax, axis=-1)], axis=-1)
tf.print('Gathered indices:\n', gather_indices, gather_indices.shape, summarize=-1)
result = tf.gather_nd(tensor, gather_indices)
tf.print('\nFinal result:\n', result, result.shape)
Input tensor:
[[[0.889752269 0.243187189 0.601408958]
[0.891950965 0.776625633 0.146243811]
[0.136176467 0.743871331 0.762170076]
[0.424416184 0.150568008 0.464055896]
[0.308753 0.0792338848 0.383242]]
[[0.741660118 0.49783361 0.935318112]
[0.0616152287 0.0367363691 0.748341084]
[0.397849679 0.765681744 0.502376914]
[0.750188231 0.304993749 0.733741879]
[0.31267941 0.778184056 0.546301]]] TensorShape([2, 5, 3])
Argmax tensor:
[[0 0 2 2 2]
[2 2 1 0 1]] TensorShape([2, 5])
Meshgrid:
[[0 0 0 0 0]
[1 1 1 1 1]] [[0 1 2 3 4]
[0 1 2 3 4]]
Stacked:
[[[0 0]
[0 1]
[0 2]
[0 3]
[0 4]]
[[1 0]
[1 1]
[1 2]
[1 3]
[1 4]]]
Gathered indices:
[[[0 0 0]
[0 1 0]
[0 2 2]
[0 3 2]
[0 4 2]]
[[1 0 2]
[1 1 2]
[1 2 1]
[1 3 0]
[1 4 1]]] TensorShape([2, 5, 3])
Final result:
[[0.889752269 0.891950965 0.762170076 0.464055896 0.383242]
[0.935318112 0.748341084 0.765681744 0.750188231 0.778184056]] TensorShape([2, 5])
On a side note, you could also consider using tf.math.top_k, since you want the max values along the last dimension. This function returns both the values (which you want) and the indices:
tensor = tf.random.uniform((60, 128, 30000)) # shape (60, 128, 30000)
values, indices = tf.math.top_k(tensor, k=1)
tf.print(tf.squeeze(values, axis=-1).shape)
TensorShape([60, 128])
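To see that the two approaches agree, here is a small sketch (my addition, not part of the original answer) comparing top_k with k=1 against reduce_max/argmax. Note that top_k returns int32 indices while argmax defaults to int64, hence the cast:

```python
import tensorflow as tf

tensor = tf.random.uniform((4, 6, 10))
values, indices = tf.math.top_k(tensor, k=1)  # both have shape (4, 6, 1)

max_values = tf.squeeze(values, axis=-1)    # shape (4, 6)
max_indices = tf.squeeze(indices, axis=-1)  # int32, shape (4, 6)

# top_k with k=1 is equivalent to reduce_max / argmax along the last axis
# (ties are effectively impossible with random floats).
assert bool(tf.reduce_all(max_values == tf.reduce_max(tensor, axis=2)))
assert bool(tf.reduce_all(
    tf.cast(max_indices, tf.int64) == tf.argmax(tensor, axis=2)))
```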