- 14 Mar, 2026 (22 commits)
- Fix missing indentation in the async with semaphore block (sketched below)
- Fix invalid elif syntax in the load_mode determination
- Fix request.steps reference (the field does not exist in the request model)
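A minimal sketch of the corrected semaphore block, assuming asyncio; `semaphore` and `do_work` are hypothetical names:

```python
import asyncio

semaphore = asyncio.Semaphore(1)

async def guarded(do_work):
    # The fix: the body must be indented under "async with", otherwise
    # the work runs outside the semaphore (or the file fails to parse).
    async with semaphore:
        return await do_work()
```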
- Without --loadall: serialize all requests (one at a time)
- With --loadall: allow one concurrent request per model
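A sketch of how the two modes might be wired, assuming asyncio; all names here are hypothetical:

```python
import asyncio

def make_limiter(loadall: bool):
    if loadall:
        # --loadall: one slot per model, so different models can run concurrently.
        per_model: dict[str, asyncio.Semaphore] = {}
        return lambda model_id: per_model.setdefault(model_id, asyncio.Semaphore(1))
    # Default: a single global slot serializes every request.
    global_sem = asyncio.Semaphore(1)
    return lambda model_id: global_sem

# Usage: async with get_semaphore(model_id): ...
get_semaphore = make_limiter(loadall=False)
```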
These parameters are not available in the installed version of stable-diffusion-cpp-python. The engine will auto-detect the architecture from file headers.
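A sketch of one way to keep such calls version-tolerant, using only the standard library's inspect; the make_engine wrapper is hypothetical:

```python
import inspect
from stable_diffusion_cpp import StableDiffusion

def make_engine(model_path: str, **optional_kwargs):
    # Pass only kwargs the installed StableDiffusion accepts, so newer
    # parameters degrade gracefully on older package versions.
    accepted = inspect.signature(StableDiffusion.__init__).parameters
    supported = {k: v for k, v in optional_kwargs.items() if k in accepted}
    return StableDiffusion(model_path=model_path, **supported)
```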
- Added is_huggingface_model_id() helper to detect HF model IDs
- Added download_huggingface_model() to download from the HF Hub
- Updated download logic to handle model IDs, URLs, and local paths
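The helper names come from this commit; the bodies below are a sketch. hf_hub_download is the real huggingface_hub API, but the filename handling is an assumption:

```python
import os
import re
from huggingface_hub import hf_hub_download

def is_huggingface_model_id(value: str) -> bool:
    # "org/repo" style, and neither a URL nor an existing local path.
    if value.startswith(("http://", "https://")) or os.path.exists(value):
        return False
    return re.fullmatch(r"[\w.-]+/[\w.-]+", value) is not None

def download_huggingface_model(model_id: str, filename: str) -> str:
    # Downloads into the local HF cache and returns the cached file path.
    return hf_hub_download(repo_id=model_id, filename=filename)
```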
- Renamed CLI argument from --clip-l-path to --llm-path
- Updated all references from args.clip_l_path to args.llm_path
- Changed config dictionary key from 'clip_l_path' to 'llm_path'
- Added model_type='z-image' and backend='vulkan' parameters
The bug was that the code checked 'if clip_l_path:', but clip_l_path had just been set to None above, so the download block was always skipped. Changed the check to 'if args.clip_l_path:' so the --clip-l-path CLI argument is properly detected and triggers the download/caching.
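Condensed from the description above as a before/after sketch; download_and_cache is a hypothetical stand-in for the actual download block:

```python
# Before: clip_l_path was just assigned None, so the branch never ran.
clip_l_path = None
if clip_l_path:                                # always falsy
    clip_l_path = download_and_cache(args.clip_l_path)

# After: test the CLI argument itself.
if args.clip_l_path:
    clip_l_path = download_and_cache(args.clip_l_path)
```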
- Changed incorrect 'elif clip_l_path:' to proper 'if clip_l_path:' logic
- Fixed vae_path assignment to use args.vae_path instead of the undefined vae_path variable
- This ensures the CLIP LLM and VAE paths are properly passed when using the --clip-l-path and --vae-path CLI arguments with the Vulkan backend
These are txt2img() parameters, not constructor parameters
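A sketch of the split, assuming stable-diffusion-cpp-python; the exact txt2img keyword names are assumptions:

```python
from stable_diffusion_cpp import StableDiffusion

# Constructor: model files and backend setup only.
sd = StableDiffusion(model_path="z-image-turbo.gguf")

# Sampling settings belong on txt2img(), not on the constructor.
images = sd.txt2img(
    prompt="a lighthouse at dusk",
    width=512,
    height=512,
    sample_steps=4,        # assumed keyword names
    cfg_scale=1.0,
    sample_method="res_multistep",
)
```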
When a URL is passed to --clip-l-path or --vae-path, the model is now automatically downloaded and cached.
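A minimal sketch of URL download-and-cache; the cache location and naming scheme are assumptions:

```python
import hashlib
import os
import urllib.request

CACHE_DIR = os.path.expanduser("~/.cache/coderai/models")  # assumed location

def download_and_cache(url: str) -> str:
    # Key the cache on a hash of the URL so repeat runs reuse the file.
    name = hashlib.sha256(url.encode()).hexdigest()[:16] + "-" + os.path.basename(url)
    path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        urllib.request.urlretrieve(url, path)
    return path
```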
- Added --clip-l-path for specifying the CLIP LLM model path
- Added --vae-path for specifying the VAE model path
- Added --image-sample-method (default: res_multistep for Z-Image Turbo)
- Added --image-steps (default: 4 for Z-Image Turbo)
- Added --image-width (default: 512)
- Added --image-height (default: 512)
- Added --image-cfg-scale (default: 1.0 for Z-Image Turbo)
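The flags and defaults above, wired into argparse as a sketch (the parser setup itself is assumed):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--clip-l-path", help="CLIP LLM model path or URL")
parser.add_argument("--vae-path", help="VAE model path or URL")
parser.add_argument("--image-sample-method", default="res_multistep")
parser.add_argument("--image-steps", type=int, default=4)
parser.add_argument("--image-width", type=int, default=512)
parser.add_argument("--image-height", type=int, default=512)
parser.add_argument("--image-cfg-scale", type=float, default=1.0)
args = parser.parse_args()
```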
- 10 Mar, 2026 (18 commits)
- When --debug is enabled, show the full command line coderai was called with (see the sketch below)
- Fixed the GGUF image model key to use the cached file path instead of the URL (lines 4565 and 5124 now use model_path)
- Removed redundant model_key assignment before model_path resolution
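For the --debug part, a sketch; shlex and sys are standard library, and args.debug is the assumed flag name:

```python
import shlex
import sys

if args.debug:
    # Reconstruct the exact command line coderai was invoked with.
    print("invoked as:", shlex.join(sys.argv))
```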
When a GGUF image model fails to load with llama.cpp, try loading it with stable-diffusion-cpp-python (sd.cpp) as a fallback.
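A sketch of the fallback, assuming both packages are importable; constructor arguments beyond model_path are omitted:

```python
def load_image_model(gguf_path: str):
    try:
        from llama_cpp import Llama
        return Llama(model_path=gguf_path)
    except Exception as exc:
        print(f"llama.cpp load failed ({exc}); retrying with sd.cpp")
        from stable_diffusion_cpp import StableDiffusion
        return StableDiffusion(model_path=gguf_path)
```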
Tell the user to update llama.cpp instead of falling back to diffusers.
If llama.cpp fails to load a GGUF image model (e.g., an unsupported architecture like lumina2), try loading it via diffusers instead.
GGUF files can have different version bytes after 'GGUF' (e.g., GGUF\x03 for version 3). Changed the magic byte check from an exact match to a prefix check.
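The corresponding check, as a sketch:

```python
def looks_like_gguf(path: str) -> bool:
    # GGUF layout: 4 magic bytes b"GGUF" followed by a little-endian
    # uint32 version, so compare only the first four bytes.
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```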
Changed the condition from 'only if no audio models' to 'always when image_models is configured'. This ensures the image model downloads at startup even when audio models are present.
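Condensed as a before/after sketch; the config keys and downloader name are assumed:

```python
# Before: skipped whenever audio models were configured.
if config.get("image_models") and not config.get("audio_models"):
    download_image_model()

# After: always runs when an image model is configured.
if config.get("image_models"):
    download_image_model()
```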
- Removed redundant 'import os' statements inside functions (lines 4522, 4926, 5005)
- Added back the missing 'from llama_cpp import Llama' that was accidentally deleted
- The global 'import os' at line 12 is now the only one

This fixes the UnboundLocalError when running --list-cached-models or other CLI options.
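A minimal reproduction of the failure mode:

```python
import os  # the single module-level import

def broken():
    # UnboundLocalError: the "import os" below binds "os" as a local
    # name for the whole function body, shadowing the global module.
    print(os.getcwd())
    import os

def fixed():
    print(os.getcwd())  # resolves to the module-level import
```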
- --list-cached-models: list all cached models with sizes (see the sketch below)
- --remove-all-models: remove all cached models
- --remove-model <modelid>: remove a specific model by name/hash (partial match)
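A sketch of the listing side, assuming a flat cache directory of model files:

```python
from pathlib import Path

def list_cached_models(cache_dir: str) -> None:
    # Print each cached model with its size in MiB, largest first.
    files = [p for p in Path(cache_dir).iterdir() if p.is_file()]
    for p in sorted(files, key=lambda p: p.stat().st_size, reverse=True):
        print(f"{p.stat().st_size / 2**20:8.1f} MiB  {p.name}")
```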
- Check whether the downloaded file is valid GGUF (magic bytes = 'GGUF')
- If not valid, show a clear error that the URL is wrong (the server returned HTML instead of the file)
- Explain that the URL must be a direct download link ending in .gguf
- Enable verbose=True in llama.cpp to see the actual error
- Print the GGUF model file size for debugging
- Add try/except with traceback to see detailed errors
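Those three together, as a sketch; verbose is a real llama_cpp.Llama parameter, the rest is scaffolding:

```python
import os
import traceback
from llama_cpp import Llama

def try_load(model_path: str):
    # A suspiciously small file usually means a failed or HTML download.
    print(f"GGUF size: {os.path.getsize(model_path)} bytes")
    try:
        return Llama(model_path=model_path, verbose=True)  # surface llama.cpp's own logs
    except Exception:
        traceback.print_exc()
        return None
```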