Fix GPU detection to only count working, functional GPUs · 056cbbf3
Authored by Stefy Lanza (nextime / spora)
- Modified detect_gpu_backends() to perform functional tests on GPUs (see the sketch after this list)
- CUDA detection now verifies that devices can actually perform tensor operations
- ROCm detection now tests device functionality before counting
- Only NVIDIA GPUs are counted for CUDA, and only devices that pass the functional test
- Prevents counting non-working GPUs, such as old AMD cards misreported as CUDA devices
- Example: a system with an old AMD GPU (device 0) and a working CUDA GPU (device 1) now correctly reports only the functional CUDA GPU
- Total VRAM calculation now reflects only the GPUs that are actually usable
- Both the PyTorch and nvidia-smi/rocm-smi detection paths were updated
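
Below is a minimal sketch of the functional-test approach for the PyTorch detection path. Only the function name detect_gpu_backends() comes from the commit; the return keys ("cuda", "rocm", "total_vram_gb"), the helper _device_is_functional(), and the NVIDIA-name heuristic are illustrative assumptions, not the project's exact code.

```python
# Sketch only: assumes a PyTorch build; names other than detect_gpu_backends()
# are hypothetical and not taken from the vidai codebase.
import torch


def _device_is_functional(index: int) -> bool:
    """Return True only if the device can actually run a small tensor operation."""
    try:
        a = torch.ones((8, 8), device=f"cuda:{index}")
        b = (a @ a).sum()  # small matmul as a functional smoke test
        return bool(torch.isfinite(b).item())
    except Exception:
        # Any failure (driver mismatch, unsupported old card, etc.) means
        # the device is not counted.
        return False


def detect_gpu_backends() -> dict:
    result = {"cuda": 0, "rocm": 0, "total_vram_gb": 0.0}
    if not torch.cuda.is_available():
        return result

    # ROCm builds of PyTorch expose torch.version.hip; CUDA builds leave it as None.
    is_rocm_build = torch.version.hip is not None

    for i in range(torch.cuda.device_count()):
        name = torch.cuda.get_device_name(i)
        if not is_rocm_build and "NVIDIA" not in name.upper():
            # On a CUDA build, skip non-NVIDIA devices (e.g. an old AMD card
            # misreported as a CUDA device). Name matching is a heuristic.
            continue
        if not _device_is_functional(i):
            continue
        if is_rocm_build:
            result["rocm"] += 1
        else:
            result["cuda"] += 1
        # Count VRAM only for devices that passed the functional test.
        result["total_vram_gb"] += torch.cuda.get_device_properties(i).total_memory / 1024**3

    return result
```

The nvidia-smi/rocm-smi path mentioned above would apply the same rule, counting a device only if a functional probe against it succeeds; that variant is not shown here.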
Name
docs
templates
vidai
.gitignore
AI.PROMPT
CHANGELOG.md
Dockerfile.runpod
LICENSE
README.md
TODO.md
build.bat
build.sh
clean.bat
clean.sh
create_pod.sh
image.jpg
requirements-cuda.txt
requirements-rocm.txt
requirements.txt
setup.bat
setup.sh
start.bat
test_comm.py
test_runpod.py
vidai.conf.sample
vidai.py
vidai.sh