14 Mar, 2026 (29 commits)

- Add debug output to trace audio model registration at startup; add debug output when the audio endpoint checks for audio_model; fix the global load_mode so it is updated at startup based on the --loadall/--loadswap flags.
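
A sketch of what the startup handling might look like; only the flag names and the load_mode global come from the commit message, the rest is assumed:

```python
import argparse

load_mode = "swap"  # module-level global; the default value is an assumption

def init_load_mode(args: argparse.Namespace) -> None:
    # Update the global at startup from the CLI flags named above.
    global load_mode
    if args.loadall:
        load_mode = "all"   # keep all models loaded
    elif args.loadswap:
        load_mode = "swap"  # swap models in and out on demand
```
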
- Fix missing indentation in the async with semaphore block; fix invalid elif syntax in the load_mode determination; fix a reference to request.steps (the field does not exist in the request model).
- Without --loadall, serialize all requests (one at a time); with --loadall, allow one concurrent request per model.
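
A minimal sketch of this scheduling rule, assuming an asyncio-based server; the semaphore names and the run_request() wrapper are illustrative, not the project's actual identifiers:

```python
import asyncio
from collections import defaultdict

# Without --loadall: one global semaphore serializes every request.
# With --loadall: one semaphore per model allows a single in-flight
# request per model, so different models can run concurrently.
global_lock = asyncio.Semaphore(1)
per_model_locks: dict[str, asyncio.Semaphore] = defaultdict(
    lambda: asyncio.Semaphore(1)
)

async def run_request(model_key: str, loadall: bool, handler):
    sem = per_model_locks[model_key] if loadall else global_lock
    async with sem:
        return await handler()
```
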
- These parameters are not available in the installed version of stable-diffusion-cpp-python; the engine will auto-detect the architecture from file headers.
- Added an is_huggingface_model_id() helper to detect HF model IDs; added download_huggingface_model() to download from the HF Hub; updated the download logic to handle model IDs, URLs, and local paths.
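
A sketch of the two helpers, assuming huggingface_hub; the ID heuristic is a guess at the detection logic, not the project's exact code:

```python
import os
import re
from huggingface_hub import hf_hub_download

def is_huggingface_model_id(source: str) -> bool:
    # HF model IDs look like "org/repo": no URL scheme and not an
    # existing local path. (Heuristic is an assumption.)
    if source.startswith(("http://", "https://")) or os.path.exists(source):
        return False
    return bool(re.fullmatch(r"[\w.-]+/[\w.-]+", source))

def download_huggingface_model(model_id: str, filename: str) -> str:
    # hf_hub_download caches under ~/.cache/huggingface by default.
    return hf_hub_download(repo_id=model_id, filename=filename)
```
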
- Renamed the CLI argument from --clip-l-path to --llm-path; updated all references from args.clip_l_path to args.llm_path; changed the config dictionary key from 'clip_l_path' to 'llm_path'; added model_type='z-image' and backend='vulkan' parameters.
- The bug was that the code checked 'if clip_l_path:', but clip_l_path had just been set to None above, so the download block was always skipped. Changed the check to 'if args.clip_l_path:' to properly detect when the --clip-l-path CLI argument is provided and trigger the download/caching.
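
A reconstructed before/after sketch of the fix; download_and_cache() stands in for the project's caching helper (a sketch of such a helper appears under the URL-caching commit below):

```python
def resolve_clip_path(args):
    clip_l_path = None

    # Before (buggy): tests the local variable that was just assigned
    # None, so the download branch can never run.
    # if clip_l_path:
    #     clip_l_path = download_and_cache(args.clip_l_path)

    # After (fixed): test the CLI argument itself.
    if args.clip_l_path:
        clip_l_path = download_and_cache(args.clip_l_path)  # hypothetical helper
    return clip_l_path
```
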
- Changed the incorrect 'elif clip_l_path:' to proper 'if clip_l_path:' logic; fixed the vae_path assignment to use args.vae_path instead of the undefined vae_path variable. This ensures the CLIP LLM and VAE paths are properly passed when using the --clip-l-path and --vae-path CLI arguments with the Vulkan backend.
- These are txt2img() parameters, not constructor parameters.
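
An illustration of the distinction, using stable-diffusion-cpp-python as its bindings are commonly documented; parameter names and value types should be checked against the installed version:

```python
from stable_diffusion_cpp import StableDiffusion

# Model file paths belong to the constructor...
sd = StableDiffusion(model_path="z-image-turbo.gguf")

# ...while sampling options are txt2img() parameters, which is where
# this commit moved them (names are assumptions; verify locally).
images = sd.txt2img(
    prompt="a lighthouse at dusk",
    width=512,
    height=512,
    sample_method="res_multistep",
    sample_steps=4,
    cfg_scale=1.0,
)
```
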
- When a URL is passed to --clip-l-path or --vae-path, the model is now automatically downloaded and cached.
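
A minimal sketch of a download-and-cache helper; the cache directory and file-naming scheme are assumptions, not the project's actual layout:

```python
import hashlib
import os
import urllib.request

def download_and_cache(url: str, cache_dir: str = "~/.cache/coderai") -> str:
    # Derive a stable cache filename from the URL, fetch once, reuse after.
    cache_dir = os.path.expanduser(cache_dir)
    os.makedirs(cache_dir, exist_ok=True)
    name = hashlib.sha256(url.encode()).hexdigest()[:16] + "-" + os.path.basename(url)
    path = os.path.join(cache_dir, name)
    if not os.path.exists(path):
        urllib.request.urlretrieve(url, path)
    return path
```
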
- Added image-generation CLI options (see the argparse sketch below):
  - --clip-l-path for specifying the CLIP LLM model path
  - --vae-path for specifying the VAE model path
  - --image-sample-method (default: res_multistep for Z-Image Turbo)
  - --image-steps (default: 4 for Z-Image Turbo)
  - --image-width (default: 512)
  - --image-height (default: 512)
  - --image-cfg-scale (default: 1.0 for Z-Image Turbo)
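
The options as an argparse sketch; only the flag names and defaults listed above come from the commit, the help strings are assumed:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--clip-l-path", help="CLIP LLM model path or URL")
parser.add_argument("--vae-path", help="VAE model path or URL")
parser.add_argument("--image-sample-method", default="res_multistep")
parser.add_argument("--image-steps", type=int, default=4)
parser.add_argument("--image-width", type=int, default=512)
parser.add_argument("--image-height", type=int, default=512)
parser.add_argument("--image-cfg-scale", type=float, default=1.0)
args = parser.parse_args()
```
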

10 Mar, 2026 (11 commits)

- When --debug is enabled, show the full command line coderai was called with; fixed the GGUF image model key to use the cached file path instead of the URL (lines 4565 and 5124 now use model_path); removed a redundant model_key assignment before the model_path resolution.
- When a GGUF image model fails to load with llama.cpp, try loading it with stable-diffusion-cpp-python (sd.cpp) as a fallback.
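
A sketch of the fallback shape; load_with_llama_cpp() and load_with_sd_cpp() are hypothetical stand-ins for the project's actual loaders:

```python
import logging

log = logging.getLogger(__name__)

def load_gguf_image_model(path: str):
    try:
        return load_with_llama_cpp(path)      # hypothetical primary loader
    except Exception as exc:                  # e.g., unsupported architecture
        log.warning("llama.cpp failed on %s (%s); trying sd.cpp", path, exc)
        return load_with_sd_cpp(path)         # stable-diffusion-cpp-python
```
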
- Tell the user to update llama.cpp instead of falling back to diffusers.
- If llama.cpp fails to load a GGUF image model (e.g., an unsupported architecture like lumina2), try loading it via diffusers instead.