Root cause: The refactored code hardcoded `torch.float16` for CUDA, ignoring the `--image-precision bf16` CLI argument. The Z-Image-Turbo model requires bfloat16 precision; using float16 causes NaN values in the image processor, resulting in all-black images.

Also restored the original model loading logic with:

- GGUF model detection (skip diffusers for GGUF)
- OOM retry with progressive memory optimization
- `use_safetensors=True`
- Sequential CPU offload support
9b3126d7
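A minimal sketch of the kind of fix described above: resolving the requested precision from the CLI flag instead of hardcoding float16 on CUDA. The function name, flag values, and CPU fallback here are illustrative assumptions, not the project's actual API.

```python
def resolve_image_dtype(image_precision: str, device: str) -> str:
    """Map the --image-precision CLI value to a dtype name (hypothetical helper)."""
    dtypes = {"bf16": "bfloat16", "fp16": "float16", "fp32": "float32"}
    if image_precision not in dtypes:
        raise ValueError(f"unsupported --image-precision value: {image_precision!r}")
    dtype = dtypes[image_precision]
    # Z-Image-Turbo needs bfloat16: float16 produces NaNs in the image
    # processor and all-black output, so the requested bf16 must not be
    # silently replaced with float16 on CUDA.
    if device == "cpu" and dtype == "bfloat16":
        # bfloat16 is slow or unsupported on some CPU backends; fall back.
        return "float32"
    return dtype
```

The resolved name would then be passed as `torch_dtype` when loading the pipeline, rather than a fixed `torch.float16`.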
Files:

- .vscode
- codai
- .gitignore
- LICENSE.md
- README.md
- build.sh
- coder
- coderai
- requirements-nvidia.txt
- requirements-vulkan.txt
- requirements.txt