When using numpy.random, we are able to generate multiple random integers with different upper limits. I was wondering if the same is possible with tf.random.uniform. For example, if I want to generate two integers bounded above by 5 and 4, I can do the following:
import numpy as np
import tensorflow as tf

np.random.randint([5, 4])
# array([0, 2])
However, the same does not work when I use TensorFlow, because minval and maxval must each be a single number. I don't want to use a for loop, because I know it will slow down the training process. What are the alternatives, if any exist? This is what I tried:

tf.random.uniform([1, 2], minval=[1, 1], maxval=[5, 4], dtype=tf.int32)
EDIT:
Time comparison:
import time

start = time.time()
lim = np.random.randint(1, 10000, size=500000)
x = np.random.randint(lim)
print(x.shape)
print("time: ", time.time() - start)
# (500000,)
# time: 0.03663229942321777
Generating 500000 numbers took 0.03 seconds with np.random.randint. If I use tf.experimental.numpy.random.randint instead, generating only 15 numbers takes the same amount of time.
l = tf.convert_to_tensor(np.random.randint(1, 2, size=15), tf.int32)
h = tf.convert_to_tensor(np.random.randint(2, 10000, size=15), tf.int32)
bounds = tf.stack([l, h], axis=1)
start = time.time()
z = tf.map_fn(fn=lambda x: tf.experimental.numpy.random.randint(low=x[0], high=x[1]), elems=bounds)
print(tf.shape(z))
print("time: ", time.time()-start)
# tf.Tensor([15], shape=(1,), dtype=int32)
# time: 0.03790450096130371
CodePudding user response:
You could define a tensor with the list of lower and upper bounds of each sample, and then use tf.map_fn to generate the random numbers with tf.experimental.numpy.random.randint.
import tensorflow as tf

bounds = tf.constant([
    [1, 5],
    [1, 4],
], dtype=tf.int64)

tf.map_fn(fn=lambda x: tf.experimental.numpy.random.randint(low=x[0], high=x[1]), elems=bounds)
To speed up the code you can set the parallel_iterations argument of tf.map_fn to a value greater than 1; see the documentation.
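For reference, a minimal sketch of passing parallel_iterations to tf.map_fn; the value 16 is an arbitrary choice, not a recommendation from the documentation:

import tensorflow as tf

bounds = tf.constant([
    [1, 5],
    [1, 4],
], dtype=tf.int64)

# Allow up to 16 iterations of the mapped function to run in parallel (arbitrary value).
tf.map_fn(
    fn=lambda x: tf.experimental.numpy.random.randint(low=x[0], high=x[1]),
    elems=bounds,
    parallel_iterations=16,
)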
CodePudding user response:
import tensorflow as tf

minval = tf.constant([1, 1], dtype=tf.float32)
maxval = tf.constant([5, 4], dtype=tf.float32)

for _ in range(10):
    # Scale a uniform [0, 1) draw into [minval, maxval) per element, then truncate to int.
    random_int = tf.cast(tf.random.uniform(shape=tf.shape(minval)) * (maxval - minval) + minval, dtype=tf.int32)
    print(random_int)
# tf.Tensor([3 1], shape=(2,), dtype=int32)
# tf.Tensor([2 3], shape=(2,), dtype=int32)
# tf.Tensor([4 2], shape=(2,), dtype=int32)
# tf.Tensor([1 1], shape=(2,), dtype=int32)
# tf.Tensor([2 3], shape=(2,), dtype=int32)
# tf.Tensor([4 3], shape=(2,), dtype=int32)
# tf.Tensor([1 1], shape=(2,), dtype=int32)
# tf.Tensor([1 1], shape=(2,), dtype=int32)
# tf.Tensor([4 2], shape=(2,), dtype=int32)
# tf.Tensor([3 3], shape=(2,), dtype=int32)
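If many draws are needed at once, the same scaling trick extends to a batch dimension without any Python loop. A minimal sketch, assuming the minval and maxval tensors defined above and an arbitrary batch size of 10:

n = 10  # arbitrary number of draws
# One row per draw; broadcasting applies the per-column bounds.
batch = tf.cast(tf.random.uniform(shape=(n, 2)) * (maxval - minval) + minval, dtype=tf.int32)
print(batch)  # shape (10, 2): first column in [1, 5), second column in [1, 4)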
Using tf.map_fn as in Flavia Giammarino's solution is a universal approach, but it may be suboptimal in terms of performance.
CodePudding user response:
What you can also try, if you really do not want to use a loop or tf.map_fn, is to create a large uniform random tensor and sample from it with the bounds that you need every time.
import tensorflow as tf

# Pre-generate a large pool of random integers in [1, 5).
random_numbers = tf.random.uniform([500, 1], minval=1, maxval=5, dtype=tf.int32)

# Pick 10 entries from the pool at random positions.
samples = 10
random_samples = tf.gather(random_numbers, tf.random.uniform([samples], maxval=random_numbers.shape[0], dtype=tf.int32), axis=0)

# Keep only the picked values strictly between the requested bounds.
lower, upper = 1, 4
tensors = tf.gather_nd(random_samples, tf.where(tf.logical_and(tf.greater(random_samples, lower), tf.less(random_samples, upper))))
print(tensors)
# tf.Tensor([2 3 2 2 2 2 2], shape=(7,), dtype=int32)
# ... and so on
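As a usage sketch, the pool lookup can be wrapped into a small helper; the function name sample_between is my own, not part of the answer:

def sample_between(pool, lower, upper, n):
    # Pick n entries from the pre-generated pool at random positions...
    idx = tf.random.uniform([n], maxval=tf.shape(pool)[0], dtype=tf.int32)
    picked = tf.gather(pool, idx, axis=0)
    # ...and keep only those strictly inside the requested bounds.
    mask = tf.logical_and(tf.greater(picked, lower), tf.less(picked, upper))
    return tf.gather_nd(picked, tf.where(mask))

print(sample_between(random_numbers, 1, 4, 10))
# Note that the number of returned values varies, since out-of-bound picks are dropped.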