    Fix GPU VRAM detection to count only available GPUs · ffe34516
    Stefy Lanza (nextime / spora ) authored
    - Modified the local node's GPU memory calculation to count only GPUs that are actually usable by a supported backend
    - Previously all GPUs in the system were counted; now CUDA GPUs are counted only if CUDA is available, and ROCm GPUs only if ROCm is available
    - Fixes an issue where unsupported GPUs (e.g. old AMD GPUs without ROCm support) were incorrectly included in VRAM totals
    - Example: a system with an old AMD GPU (8GB, no ROCm) and a CUDA GPU (24GB) now correctly reports 24GB total instead of 32GB
    - Ensures accurate GPU resource reporting in the cluster nodes interface
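The logic described above can be sketched as follows. This is an illustrative model of the fix, not the project's actual code: the function name, the GPU record shape, and the backend-availability flags are all hypothetical.

```python
# Hypothetical sketch of the VRAM fix: sum VRAM only for GPUs whose
# backend (CUDA or ROCm) is actually available on this node.
# The data shapes and names here are illustrative, not vidai's real API.

def total_vram_gb(gpus, cuda_available, rocm_available):
    """Sum VRAM across only the GPUs usable by an available backend.

    gpus: list of dicts like {"backend": "cuda", "vram_gb": 24}
    """
    total = 0
    for gpu in gpus:
        # A GPU contributes to the total only when its backend runtime
        # is present -- this is the core of the fix.
        if gpu["backend"] == "cuda" and cuda_available:
            total += gpu["vram_gb"]
        elif gpu["backend"] == "rocm" and rocm_available:
            total += gpu["vram_gb"]
    return total

# The scenario from the commit message: an old AMD GPU (8GB, no ROCm
# runtime) plus a CUDA GPU (24GB). Only the CUDA card is counted.
gpus = [
    {"backend": "rocm", "vram_gb": 8},
    {"backend": "cuda", "vram_gb": 24},
]
print(total_vram_gb(gpus, cuda_available=True, rocm_available=False))  # 24
```

With the old behavior both cards would have been summed (32GB); filtering by backend availability yields the correct 24GB.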
    Repository contents:
    docs
    templates
    vidai
    .gitignore
    AI.PROMPT
    CHANGELOG.md
    Dockerfile.runpod
    LICENSE
    README.md
    TODO.md
    build.bat
    build.sh
    clean.bat
    clean.sh
    create_pod.sh
    image.jpg
    requirements-cuda.txt
    requirements-rocm.txt
    requirements.txt
    setup.bat
    setup.sh
    start.bat
    test_comm.py
    test_runpod.py
    vidai.conf.sample
    vidai.py
    vidai.sh