I have Python 3.8.8 and I installed the latest versions of CUDA and cuDNN:
- CUDA: 11.6.1_511.65
- cuDNN: windows-x86_64-8.3.2.44
The installation completed successfully.
I checked whether everything was installed correctly:
nvidia-smi
NVIDIA-SMI 511.65 Driver Version: 511.65 CUDA Version: 11.6
nvcc -V
Cuda compilation tools, release 11.6, V11.6.112
In Spyder I ran the following:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
with the output:
incarnation: 12146292582786704115
xla_global_id: -1
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 4185718784
locality {
bus_id: 1
links {
}
}
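Since `device_lib.list_local_devices()` shows a `/device:GPU:0` entry, TensorFlow itself can see the GPU. As a sanity check (a minimal sketch, assuming TensorFlow 2.x is installed), the same information is available through `tf.config`:

```python
import tensorflow as tf

# list_physical_devices returns the GPUs TensorFlow can actually use;
# an empty list means TensorFlow is running CPU-only.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
```

If this prints a non-empty list, the CUDA/cuDNN install is fine as far as TensorFlow is concerned.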
So the GPU should be available, right?
But when I check whether PyTorch is using it, I get:
import torch as T
device = T.device('cuda:0' if T.cuda.is_available() else 'cpu')
device
device(type='cpu')
This means I'm working on the CPU.
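A common cause here is that the installed PyTorch wheel is a CPU-only build, independent of the system-wide CUDA install. As a hedged sketch of how to tell the two cases apart (assuming `torch` is importable):

```python
import torch

# torch.version.cuda is None on a CPU-only build of PyTorch;
# on a CUDA build it reports the toolkit version the wheel was compiled against.
print("PyTorch version:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

# The usual device-selection idiom: fall back to CPU when CUDA is unavailable.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("Selected device:", device)
```

If `torch.version.cuda` prints `None`, the fix is to reinstall PyTorch from the CUDA-enabled wheel index rather than changing the CUDA/cuDNN install.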
Can someone please tell me whether this is the right configuration?
If not, please share the correct one (-:
Thanks,
Guy