Fix GPU VRAM detection to use correct method from /api/stats · efbb77ce
Authored by Stefy Lanza (nextime / spora)
    - Updated GPU VRAM detection to use torch.cuda.get_device_properties(i).total_memory / 1024**3
    - Same method as used in /api/stats endpoint for consistency
    - Still filters out non-NVIDIA and non-functional GPUs
    - Now shows correct VRAM amounts (e.g., 24GB for RTX 3090 instead of hardcoded 8GB)
    - Fixed both worker-level and node-level GPU detection
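The detection approach described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names, the NVIDIA name-matching heuristic, and the graceful fallback when CUDA is unavailable are assumptions; only the `torch.cuda.get_device_properties(i).total_memory / 1024**3` conversion is taken from the commit message.

```python
def bytes_to_gib(total_memory_bytes):
    """Convert a raw byte count to GiB, matching the /api/stats method."""
    return total_memory_bytes / 1024**3


def detect_gpus():
    """Enumerate CUDA devices and report VRAM via get_device_properties.

    Returns a list of (index, name, vram_gib) tuples. Devices that are
    not functional (CUDA unavailable) or whose name does not look like
    an NVIDIA GPU are skipped -- the name check here is an illustrative
    heuristic, not the project's actual filter.
    """
    try:
        import torch
    except ImportError:
        return []  # torch not installed: no GPUs to report
    if not torch.cuda.is_available():
        return []  # no functional CUDA devices
    gpus = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        if not any(tag in props.name for tag in ("NVIDIA", "GeForce", "RTX")):
            continue  # skip non-NVIDIA devices (hypothetical filter)
        gpus.append((i, props.name, bytes_to_gib(props.total_memory)))
    return gpus
```

With this conversion, an RTX 3090 reporting 25769803776 bytes of `total_memory` comes out as 24 GiB rather than a hardcoded value.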
Repository contents:
docs
templates
vidai
.gitignore
AI.PROMPT
CHANGELOG.md
Dockerfile.runpod
LICENSE
README.md
TODO.md
build.bat
build.sh
clean.bat
clean.sh
create_pod.sh
image.jpg
requirements-cuda.txt
requirements-rocm.txt
requirements.txt
setup.bat
setup.sh
start.bat
test_comm.py
test_runpod.py
vidai.conf.sample
vidai.py
vidai.sh