14 Mar, 2026 (40 commits)
Your Name authored
- Added --vae-tiling flag to enable VAE tiling for lower VRAM usage
- Added --clip-on-cpu flag to run CLIP on CPU to save VRAM
- Both options work with stable-diffusion-cpp-python
-
Your Name authored
- Added --image-seed argument to set a default seed for image generation
- Updated diffusers and sd.cpp code to use the request seed or the CLI default seed
- Priority: request seed > CLI default seed > random
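The seed-resolution order above can be sketched as follows; the function name and argument shapes are illustrative, not the project's actual API:

```python
import random

def resolve_seed(request_seed, cli_default_seed):
    """Pick a seed with priority: request seed > CLI default seed > random."""
    if request_seed is not None:
        return request_seed
    if cli_default_seed is not None:
        return cli_default_seed
    # Neither source supplied a seed: fall back to a random 32-bit value.
    return random.randint(0, 2**32 - 1)
```

Checking against `None` (rather than truthiness) matters here so that an explicit seed of 0 is still honored.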
-
Your Name authored
- Added model_key initialization before sd.cpp loading in the on-demand section
- Added model_key assignment before adding the model to the manager
-
Your Name authored
- Fixed model_key variable scope issue in the GGUF -> sd.cpp fallback
- Fixed model_path being undefined in the diffusers preloading section
- These fixes prevent startup crashes when using --loadall
-
Your Name authored
- Reordered the image generation backend priority to try torch/diffusers first
- If torch/diffusers fails (ImportError or other error), fall back to stable-diffusion-cpp-python
- If both backends fail, return a helpful error message with installation instructions
- Added dynamic loading of the sd.cpp model if not pre-loaded
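A minimal sketch of that fallback chain, with the backends abstracted as callables; the function and error message below are assumptions, not the repo's real code:

```python
def generate_image(prompt, backends):
    """Try each backend in priority order; return the first success.

    `backends` is an ordered list of callables, e.g. the diffusers
    backend first and the stable-diffusion-cpp-python backend second.
    """
    for backend in backends:
        try:
            return backend(prompt)
        except Exception:
            # ImportError (backend not installed) or any runtime
            # failure: move on to the next backend.
            continue
    return ("error: no image generation backend available; "
            "install diffusers+torch or stable-diffusion-cpp-python")
```

Catching broad `Exception` (not just `ImportError`) matches the commit's intent that any diffusers failure should trigger the sd.cpp fallback.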
-
Your Name authored
- Add debug output to trace audio model registration at startup
- Add debug output when the audio endpoint checks for audio_model
- Fix global load_mode to be updated at startup based on the --loadall/--loadswap flags
-
Your Name authored
- Fix missing indentation in the async with semaphore block
- Fix invalid elif syntax in the load_mode determination
- Fix request.steps reference (the field doesn't exist in the request model)
-
Your Name authored
- Without --loadall: serialize all requests (one at a time)
- With --loadall: allow one concurrent request per model
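That policy can be sketched with asyncio semaphores; the class and method names are hypothetical, not the project's actual identifiers:

```python
import asyncio

class RequestGate:
    """Without --loadall, every request shares one global Semaphore(1),
    so all requests are fully serialized. With --loadall, each model key
    gets its own Semaphore(1), allowing one in-flight request per model."""

    def __init__(self, loadall: bool):
        self.loadall = loadall
        self.global_sem = asyncio.Semaphore(1)
        self.per_model = {}

    def semaphore_for(self, model_key: str) -> asyncio.Semaphore:
        if not self.loadall:
            return self.global_sem
        if model_key not in self.per_model:
            self.per_model[model_key] = asyncio.Semaphore(1)
        return self.per_model[model_key]
```

A handler would then wrap its work in `async with gate.semaphore_for(model_key): ...`, which is the `async with semaphore` block the later indentation fix refers to.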
-
Your Name authored
These parameters are not available in the installed version of stable-diffusion-cpp-python. The engine will auto-detect the architecture from file headers.
-
Your Name authored
- Added is_huggingface_model_id() helper to detect HF model IDs
- Added download_huggingface_model() to download from HF Hub
- Updated the download logic to handle model IDs, URLs, and local paths
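The routing between model IDs, URLs, and local paths might look like this sketch. `is_huggingface_model_id` is named in the commit, but the regex and ordering below are assumptions:

```python
import os
import re

def is_huggingface_model_id(ref: str) -> bool:
    """Treat 'org/name' style strings that are neither URLs nor
    existing local paths as Hugging Face Hub model IDs."""
    if ref.startswith(("http://", "https://")):
        return False
    if os.path.exists(ref):
        return False
    return re.fullmatch(r"[\w.-]+/[\w.-]+", ref) is not None

def classify_model_ref(ref: str) -> str:
    """Decide how a model reference should be fetched."""
    if ref.startswith(("http://", "https://")):
        return "url"          # direct download
    if os.path.exists(ref):
        return "local"        # use as-is
    if is_huggingface_model_id(ref):
        return "hf_model_id"  # fetch via the HF Hub client
    return "unknown"
```

Checking for an existing local path before the HF pattern avoids misrouting a relative path like `models/sd.gguf` that happens to match `org/name`.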
-
Your Name authored
- Renamed CLI argument from --clip-l-path to --llm-path
- Updated all references from args.clip_l_path to args.llm_path
- Changed the config dictionary key from 'clip_l_path' to 'llm_path'
- Added model_type='z-image' and backend='vulkan' parameters
-
Your Name authored
The bug was that the code checked 'if clip_l_path:' but clip_l_path was just set to None above, so the download block was always skipped. Changed to check 'if args.clip_l_path:' to properly detect when the --clip-l-path CLI argument is provided and trigger the download/caching.
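A minimal reproduction of that bug pattern; the parser setup is a stand-in, and only the two conditions mirror the fix described above:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--clip-l-path")
args = parser.parse_args(["--clip-l-path", "clip_l.safetensors"])

clip_l_path = None  # resolved later, after download/caching

# Buggy: tests the local variable that was just set to None, so the
# download branch can never run even when the flag was passed.
buggy_triggers = bool(clip_l_path)

# Fixed: test the parsed CLI argument instead.
fixed_triggers = bool(args.clip_l_path)
```

Note that argparse maps the hyphenated flag `--clip-l-path` to the attribute `args.clip_l_path`, which is why the fix reads from `args` rather than the shadowing local.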
-
Your Name authored
- Changed the incorrect 'elif clip_l_path:' to proper 'if clip_l_path:' logic
- Fixed the vae_path assignment to use args.vae_path instead of the undefined vae_path variable
- This ensures the CLIP LLM and VAE paths are properly passed when using the --clip-l-path and --vae-path CLI arguments with the Vulkan backend
-
Your Name authored
These are txt2img() parameters, not constructor parameters.
-