Python - Keras: free memory after complete the training

Time:04-01

I built an autoencoder in Keras based on a CNN architecture. My laptop has 64 GB of RAM, but after the training process completes I noticed that at least a third of the memory is still in use, and the GPU memory as well. I haven't found a good way to release the memory; the only thing that works is closing the Anaconda Prompt window and the Jupyter notebook. Does anyone have good suggestions? Thank you!

CodePudding user response:

Release the RAM memory
To release the RAM, it is usually enough to del the variables, as @nuric suggested in the comments.
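As a plain-Python illustration of why del works (the Model class here is just a hypothetical stand-in for a large trained model, not anything from Keras):

```python
import gc
import weakref

class Model:
    """Hypothetical stand-in for a large trained model."""
    def __init__(self):
        self.weights = [0.0] * 1_000_000  # pretend this is a big buffer

model = Model()
ref = weakref.ref(model)   # watch the object without keeping it alive

del model                  # drop the last strong reference
gc.collect()               # also sweep any reference cycles

print(ref() is None)       # True: the object has been reclaimed
```

Note that del only removes one reference; the memory is freed once no other variable (e.g. a history object or a notebook Out cell) still points at the model.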
Release the GPU memory
Releasing the GPU memory is a bit trickier than releasing the RAM. A common suggestion (assuming you are using Keras) is:

    from keras import backend as K
    K.clear_session()

However, this does not work for everyone (even combined with del model, it may still not release the memory).
If the above method does not work for you, try the following (you need to install the numba library):

    from numba import cuda
    cuda.select_device(0)
    cuda.close()

The reason behind this: TensorFlow only allocates GPU memory, while CUDA is responsible for managing it. So if K.clear_session() has cleared all graphs but CUDA still refuses to release the GPU memory, you can use the numba cuda library to control CUDA directly and force it to free the GPU memory.
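Putting the steps above together, a best-effort cleanup helper might look like the sketch below. The name release_memory and its arguments are made up for illustration; each step is skipped if the corresponding library (or a GPU) is unavailable:

```python
import gc

def release_memory(names, namespace):
    """Best-effort cleanup after training (a sketch, not an official API)."""
    for name in names:                # e.g. ["model", "history"]
        namespace.pop(name, None)     # drop the strong references
    gc.collect()                      # release RAM held in reference cycles

    try:
        from keras import backend as K
        K.clear_session()             # drop the TensorFlow graph/session state
    except Exception:
        pass                          # Keras not installed

    try:
        from numba import cuda
        cuda.select_device(0)         # the GPU used for training
        cuda.close()                  # destroy the CUDA context, freeing GPU memory
    except Exception:
        pass                          # numba not installed or no GPU present
```

For example, in a notebook you might call release_memory(["model"], globals()). Be aware that cuda.close() destroys the process's CUDA context, so the same process cannot use the GPU afterwards.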

CodePudding user response:


Replying to the 1st floor response: method 2 does release the memory, but afterwards the program can no longer run?
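That is expected: cuda.close() destroys the process's CUDA context, so TensorFlow cannot use the GPU again in the same process. A common workaround (a sketch, not from this thread) is to run each training job in a child process; when the child exits, the OS reclaims all of its RAM and GPU memory, and the parent can launch the next job with a clean slate. Here the inline script is a trivial stand-in for a real training script such as train.py:

```python
import subprocess
import sys

# Stand-in for a real training script; in practice this would be
# something like [sys.executable, "train.py"].
code = "print(sum(range(1000)))"

proc = subprocess.run(
    [sys.executable, "-c", code],
    capture_output=True, text=True, check=True,
)
result = int(proc.stdout)   # the parent keeps only the result;
print(result)               # the child's RAM and GPU memory are gone
```

The multiprocessing module can be used the same way (with the "spawn" start method, so the child does not inherit any CUDA state).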