Training a model with PyTorch raises a CUDA out-of-memory error even though GPU memory is free

Time: 10-25

This is the error message: RuntimeError: CUDA out of memory. Tried to allocate 14.13 MiB (GPU 0; 6.00 GiB total capacity; 356.92 MiB already allocated; 3.99 GiB free; 6.58 MiB cached)
It clearly says nearly 4 GiB is free, so why does allocating just 14.13 MiB fail with "out of memory"? Please help.

CodePudding user response:

Check whether your PyTorch and cuDNN versions are compatible.

CodePudding user response:

How did you solve it? I ran into the same error.

CodePudding user response:

Try specifying the GPU explicitly:
CUDA_VISIBLE_DEVICES="0" python demo.py
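The same restriction can also be applied from inside the script instead of on the command line; a minimal sketch (the variable must be set before torch initializes CUDA):

```python
import os

# Equivalent to running `CUDA_VISIBLE_DEVICES="0" python demo.py`:
# only GPU 0 is visible to any CUDA library loaded afterwards.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import torch  # import torch only AFTER the variable is set

print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 0
```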

CodePudding user response:

The cause is the DataLoader's multiprocessing configuration; setting num_workers to 0 can solve it.
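A minimal sketch of this suggestion, assuming a standard `torch.utils.data.DataLoader` setup (the tensor dataset here is a stand-in for the asker's real data):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset; replace with your own.
dataset = TensorDataset(torch.zeros(8, 3), torch.zeros(8))

# num_workers=0 loads batches in the main process instead of worker
# subprocesses, sidestepping multiprocessing-related CUDA issues.
loader = DataLoader(dataset, batch_size=4, num_workers=0)

for inputs, targets in loader:
    pass  # training step goes here
```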

CodePudding user response:

Can you share the solution? What was the cause?

CodePudding user response:

That still didn't solve it.

CodePudding user response:

Most likely the batch_size is too large; reduce it a bit.
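One way to act on this advice systematically is to halve the batch size until a training step stops raising CUDA out of memory. A sketch in plain Python; `train_step` is a hypothetical callable standing in for one training iteration at a given batch size:

```python
def find_fitting_batch_size(train_step, start=64):
    """Halve the batch size until `train_step` no longer raises CUDA OOM.

    `train_step` is a hypothetical callable that runs one training
    iteration at the given batch size and raises RuntimeError with an
    "out of memory" message when the batch does not fit on the GPU.
    """
    batch_size = start
    while batch_size >= 1:
        try:
            train_step(batch_size)
            return batch_size
        except RuntimeError as exc:
            if "out of memory" not in str(exc):
                raise  # unrelated error: re-raise it
            batch_size //= 2
    raise RuntimeError("even batch size 1 does not fit in GPU memory")
```

In a real PyTorch loop you would also call `torch.cuda.empty_cache()` between attempts so memory cached by the failed allocation is released.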