Fix GPU VRAM detection to count only available GPUs
- Modified the local node GPU memory calculation to count only GPUs that are actually available for supported backends
- Previously counted all GPUs in the system; now counts CUDA GPUs only if CUDA is available and ROCm GPUs only if ROCm is available
- Fixes an issue where unsupported GPUs (e.g. old AMD GPUs without ROCm support) were incorrectly included in VRAM totals
- Example: a system with an old AMD GPU (8 GB, no ROCm) and a CUDA GPU (24 GB) now correctly reports 24 GB total instead of 32 GB
- Ensures accurate GPU resource reporting in the cluster nodes interface
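The filtering described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual patch: the function name, the `(backend, vram_bytes)` GPU representation, and the availability flags are all assumptions made for the example.

```python
# Hypothetical sketch: sum VRAM only over GPUs whose backend is
# actually available on this node, instead of over every GPU present.

def available_vram_bytes(gpus, cuda_available, rocm_available):
    """Sum VRAM across GPUs usable by an available backend.

    gpus: list of (backend, vram_bytes) tuples, backend in {"cuda", "rocm"}.
    cuda_available / rocm_available: whether each runtime is usable here.
    """
    supported = set()
    if cuda_available:
        supported.add("cuda")
    if rocm_available:
        supported.add("rocm")
    # GPUs whose backend is not usable contribute nothing to the total.
    return sum(vram for backend, vram in gpus if backend in supported)

GiB = 1024 ** 3
# The scenario from the description: an old AMD GPU (8 GB, ROCm not
# available for it) plus a CUDA GPU (24 GB) with CUDA available.
gpus = [("rocm", 8 * GiB), ("cuda", 24 * GiB)]
total = available_vram_bytes(gpus, cuda_available=True, rocm_available=False)
print(total // GiB)  # 24, not 32
```

With the old behavior (summing over all GPUs unconditionally) the same list would report 32 GB.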