fix: Calculate 25% more VRAM for base models with weights/LoRAs
Instead of adding a fixed 2 GB overhead, the estimate now adds 25% more VRAM for base models that will have fine-tuned weights/tensors or LoRA adapters loaded on top, so the headroom scales with model size.
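A minimal sketch of the heuristic (the function and parameter names here are hypothetical, not the actual identifiers in this codebase):

```python
def estimate_vram_bytes(base_model_bytes: int, has_extra_weights: bool) -> int:
    """Estimate VRAM needed to load a base model.

    Models that will have fine-tuned weights/tensors or LoRA adapters
    loaded on top get a 25% headroom multiplier; a proportional margin
    tracks model size better than a fixed 2 GB overhead.
    """
    if has_extra_weights:
        return int(base_model_bytes * 1.25)
    return base_model_bytes
```

For an 8 GB base model with a LoRA adapter this yields a 10 GB estimate, whereas the old fixed overhead would have produced the same 2 GB margin for a 2 GB model and a 70 GB one.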