Fix GPU detection to count only functional GPUs
- Modified detect_gpu_backends() to perform functional tests on GPUs
- CUDA detection now verifies devices can actually perform tensor operations
- ROCm detection now tests device functionality before counting
- Only NVIDIA GPUs are counted for CUDA, and only functional devices
- Prevents counting non-working GPUs, such as old AMD cards misreported as CUDA
- Example: a system with an old AMD GPU (device 0) and a working CUDA GPU (device 1) now correctly shows only the functional CUDA GPU
- Total VRAM calculation now reflects only actually usable GPUs
- Both the PyTorch and nvidia-smi/rocm-smi detection paths were updated (see the sketch after this list)
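A minimal sketch of what the PyTorch-path functional check could look like, assuming PyTorch is installed. Only detect_gpu_backends() is named in the change above; the helper _cuda_device_works(), the returned dictionary keys, and the NVIDIA name filter are illustrative assumptions, not the actual patch.

```python
import torch


def _cuda_device_works(index: int) -> bool:
    """Return True if the CUDA device at `index` can run a small tensor op."""
    try:
        with torch.cuda.device(index):
            # A tiny matmul exercises allocation and kernel launch.
            x = torch.ones((8, 8), device=f"cuda:{index}")
            y = x @ x
            torch.cuda.synchronize(index)
            return bool(torch.isfinite(y).all())
    except Exception:
        # Any failure (driver error, unsupported arch, misreported device)
        # means this GPU is not counted.
        return False


def detect_gpu_backends() -> dict:
    """Count only CUDA devices that pass the functional test (sketch)."""
    result = {"cuda_devices": 0, "total_vram_gb": 0.0}
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            # Skip devices that are not NVIDIA but get misreported as CUDA.
            if "nvidia" not in props.name.lower():
                continue
            if _cuda_device_works(i):
                result["cuda_devices"] += 1
                result["total_vram_gb"] += props.total_memory / 1024**3
    return result
```

The key design point is that enumeration alone (device_count, nvidia-smi output) is not trusted; a device contributes to the count and to total VRAM only after a real tensor operation succeeds on it.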