I have 4 GPUs (RTX 3090) in one PC.
I used only 1 GPU for training and prediction, but now I want to use all 4 GPUs.
During training, all 4 GPUs are active, but during prediction only 1 GPU is used.
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = MyModel()

model_checkpoint = ModelCheckpoint(model_path, monitor='loss', verbose=1, save_best_only=True)
history = model.fit(trainData,
                    steps_per_epoch=num_images // batch_size,
                    callbacks=[model_checkpoint],
                    verbose=1)
This uses all 4 GPUs.
model.predict(testData, verbose=1)
# for x in testData:
#     model.predict_on_batch(x)
This uses only 1 GPU.
How can I use all of my GPUs?
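For reference, a quick way to check how many GPUs TensorFlow and the strategy actually pick up (nothing model-specific, just a diagnostic sketch):

import tensorflow as tf

# Physical GPUs TensorFlow can see (should list the four RTX 3090s).
print(tf.config.list_physical_devices('GPU'))

mirrored_strategy = tf.distribute.MirroredStrategy()
# Number of replicas the strategy will spread work over.
print("Replicas in sync:", mirrored_strategy.num_replicas_in_sync)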
CodePudding user response:
I solved it by changing how I feed my test dataset.
I was passing the test data through a plain Python generator:
def testGenerator(image_path):
    for file in image_path:
        img = file
        img = img / 255  # normalize to [0, 1]
        img = np.reshape(img, img.shape + (1,)) if (not flag_multi_class) else img  # add channel dimension
        yield img
I wanted the data to be loaded onto the GPUs.
So I wrapped the generator with "tf.data.Dataset.from_generator", and prediction used all my GPUs!
Load the saved model:
model = tf.keras.models.load_model(model_path)
Make a TensorFlow dataset:
testGene = testGenerator(img_data)
test_dataset = tf.data.Dataset.from_generator(
    lambda: testGene,
    output_types=tf.float64,
    # output_shapes=tf.TensorShape([None]),
    output_shapes=tf.TensorShape([512, 512, 1])
).batch(global_batch_size)
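As a side note, newer TensorFlow releases deprecate output_types/output_shapes in from_generator in favor of a single output_signature argument; an equivalent construction (still assuming 512x512x1 float images) would look roughly like this:

test_dataset = tf.data.Dataset.from_generator(
    lambda: testGene,
    output_signature=tf.TensorSpec(shape=(512, 512, 1), dtype=tf.float64)
).batch(global_batch_size)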
result = model.predict(
    test_dataset,
    verbose=1,
    steps=math.ceil(max / global_batch_size),
    callbacks=[CustomCallback(root_file_list, int(max / len(root_file_list) / global_batch_size))]
)
It worked, using all of my GPUs!
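Putting the pieces together, here is a self-contained sketch of the same approach with dummy data; model_path, the 512x512 image size, the per-replica batch size, and loading the model inside strategy.scope() are my assumptions, not taken from the code above:

import math
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
global_batch_size = 8 * strategy.num_replicas_in_sync  # assumed per-replica batch size of 8

# Dummy grayscale "images" standing in for the real test files.
img_data = [np.random.rand(512, 512).astype(np.float64) for _ in range(32)]

def testGenerator(image_path):
    for file in image_path:
        img = file / 255                         # normalize to [0, 1]
        img = np.reshape(img, img.shape + (1,))  # add the channel dimension
        yield img

test_dataset = tf.data.Dataset.from_generator(
    lambda: testGenerator(img_data),
    output_types=tf.float64,
    output_shapes=tf.TensorShape([512, 512, 1])
).batch(global_batch_size)

# Assumption: loading inside the strategy scope mirrors the weights across the GPUs.
model_path = 'my_saved_model'  # hypothetical path to the trained model
with strategy.scope():
    model = tf.keras.models.load_model(model_path)

result = model.predict(
    test_dataset,
    verbose=1,
    steps=math.ceil(len(img_data) / global_batch_size)
)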