Starting at epoch 0. LR=0.001
Checkpoint Path: D:\daima\Mask_RCNN-master\logs\shapes20190515T2105\mask_rcnn_shapes_{epoch:04d}.h5
Selecting layers to train
fpn_c5p5 (Conv2D)
fpn_c4p4 (Conv2D)
fpn_c3p3 (Conv2D)
fpn_c2p2 (Conv2D)
fpn_p5 (Conv2D)
fpn_p2 (Conv2D)
fpn_p3 (Conv2D)
fpn_p4 (Conv2D)
In model: rpn_model
rpn_conv_shared (Conv2D)
rpn_class_raw (Conv2D)
rpn_bbox_pred (Conv2D)
mrcnn_mask_conv1 (TimeDistributed)
mrcnn_mask_bn1 (TimeDistributed)
mrcnn_mask_conv2 (TimeDistributed)
mrcnn_mask_bn2 (TimeDistributed)
mrcnn_class_conv1 (TimeDistributed)
mrcnn_class_bn1 (TimeDistributed)
mrcnn_mask_conv3 (TimeDistributed)
mrcnn_mask_bn3 (TimeDistributed)
mrcnn_class_conv2 (TimeDistributed)
mrcnn_class_bn2 (TimeDistributed)
mrcnn_mask_conv4 (TimeDistributed)
mrcnn_mask_bn4 (TimeDistributed)
mrcnn_bbox_fc (TimeDistributed)
mrcnn_mask_deconv (TimeDistributed)
mrcnn_class_logits (TimeDistributed)
mrcnn_mask (TimeDistributed)
C:\Users\\Anaconda3\envs\tensorflow1\lib\site-packages\tensorflow\python\ops\gradients_impl.py:112: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape."
Epoch 1/10
The loss values never show up at this point. I checked the GPU and found it fully utilized, so it should be training.
I switched from multi-threaded loading to a single thread, but the problem is still there:
it just keeps training forever without printing anything. Can someone take a look and tell me what the reason might be?
Also, how can I print out the loss value?
CodePudding user response:
Your batch_size may be too big. See the comment in mrcnn/config.py:
# Number of images to train with on each GPU. A 12GB GPU can typically
# handle 2 images of 1024x1024px.
# Adjust based on your GPU memory and image sizes. Use the highest
# number that your GPU can handle for best performance.
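As for printing the loss, one common approach is a per-batch logging callback. Below is only a minimal sketch: `BatchLossLogger` is a hypothetical name, and in a real run it should subclass `keras.callbacks.Callback` and be passed to training via `callbacks=[BatchLossLogger()]`.

```python
# Minimal Keras-style callback sketch that prints the loss after every
# batch. Hypothetical class name; with the real library you would
# subclass keras.callbacks.Callback and pass an instance through the
# `callbacks` argument of the training call.
class BatchLossLogger:
    def on_batch_end(self, batch, logs=None):
        # Keras fills `logs` with the running metrics for this batch.
        logs = logs or {}
        loss = logs.get("loss")
        if loss is not None:
            print("batch %d - loss: %.4f" % (batch, loss))

# Simulated invocation, the way Keras would call it at the end of batch 0:
BatchLossLogger().on_batch_end(0, {"loss": 1.2345})  # prints "batch 0 - loss: 1.2345"
```

If even this prints nothing, the generator is likely stuck before the first batch completes, which matches the "full GPU but no output" symptom.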