Stefy Lanza (nextime / spora) authored
- Updated GPU VRAM detection to use torch.cuda.get_device_properties(i).total_memory / 1024**3
- Same method as used in the /api/stats endpoint, for consistency
- Still filters out non-NVIDIA and non-functional GPUs
- Now shows correct VRAM amounts (e.g., 24 GB for an RTX 3090 instead of a hardcoded 8 GB)
- Fixed both worker-level and node-level GPU detection
efbb77ce
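
The detection described in the commit message might look like the sketch below. The `detect_nvidia_vram` helper and the name-based NVIDIA filter are illustrative assumptions, not the project's actual code; only the `total_memory / 1024**3` conversion and the use of `torch.cuda.get_device_properties` are taken from the commit itself.

```python
def detect_nvidia_vram(devices):
    """Filter to NVIDIA GPUs and report VRAM in GiB.

    `devices` is a list of (name, total_memory_bytes) pairs.
    Uses the same conversion as the /api/stats endpoint:
    total_memory / 1024**3.
    """
    return {
        name: round(total_memory / 1024**3)
        for name, total_memory in devices
        # Hypothetical filter: keep devices whose name mentions NVIDIA.
        if "nvidia" in name.lower()
    }


if __name__ == "__main__":
    try:
        import torch

        # Query each visible CUDA device; an RTX 3090 reports ~24 GiB here
        # rather than a hardcoded value.
        devices = [
            (torch.cuda.get_device_properties(i).name,
             torch.cuda.get_device_properties(i).total_memory)
            for i in range(torch.cuda.device_count())
        ]
    except ImportError:
        # Sample data so the sketch runs without torch installed.
        devices = [("NVIDIA GeForce RTX 3090", 24 * 1024**3)]
    print(detect_nvidia_vram(devices))
```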