- 19 Mar, 2026 30 commits
-
Your Name authored
- In ondemand mode (no --load-all or --loadswap specified), when a new model is requested, the current model in VRAM is now fully unloaded before loading the new one. This ensures clean model switching.
- Added cleanup logic to both /v1/chat/completions and /v1/completions endpoints
- Added same logic to image generation endpoints (diffusers and sd.cpp paths)
- Cleanup includes: model cleanup, gc.collect(), torch.cuda.empty_cache()
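A minimal sketch of the unload-before-load flow, assuming a module-level model handle and PyTorch with CUDA; the helper name `unload_current_model` is illustrative, not the actual function in this codebase:

```python
import gc
import torch

current_model = None  # handle to whatever model is resident in VRAM

def unload_current_model() -> None:
    """Drop the resident model and reclaim VRAM before loading the next one."""
    global current_model
    if current_model is None:
        return
    current_model = None          # drop the last strong reference
    gc.collect()                  # collect Python-side garbage first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached CUDA blocks to the allocator
```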
-
Your Name authored
Root cause: the refactored code was hardcoding torch.float16 for CUDA, ignoring the --image-precision bf16 CLI argument. The Z-Image-Turbo model requires bfloat16 precision; using float16 causes NaN values in the image processor, resulting in all-black images.
Also restored the original model loading logic with:
- GGUF model detection (skip diffusers for GGUF)
- OOM retry with progressive memory optimization
- use_safetensors=True
- Sequential CPU offload support
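A hedged sketch of the intended dtype selection, assuming the CLI value arrives as a string in `args.image_precision` (the mapping keys below are assumptions):

```python
import torch

def pick_image_dtype(args) -> torch.dtype:
    """Honor --image-precision instead of hardcoding float16 on CUDA."""
    dtype_map = {
        "bf16": torch.bfloat16,  # required by Z-Image-Turbo; fp16 yields NaNs
        "fp16": torch.float16,
        "fp32": torch.float32,
    }
    precision = getattr(args, "image_precision", None)
    if precision in dtype_map:
        return dtype_map[precision]
    # Fall back only when no precision was requested on the command line.
    return torch.float16 if torch.cuda.is_available() else torch.float32
```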
-
Your Name authored
- Changed default image size from 512x512 back to 1024x1024 to match original coderai
- Changed NaN handling from 0.5 to 0.0 to match original coderai
-
Your Name authored
- Added set_global_args call for images module in main.py
- Each API module has its own global_args, so it needs to be set separately
- Added debug logging to trace global_args in images.py
-
Your Name authored
- Fixed file path not being set in app.py for the /v1/files endpoint
- Fixed Host header parsing to correctly extract the hostname without the port
- Added debug logging to trace URL construction and file serving
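For the Host header fix, a minimal sketch of port-safe hostname extraction (illustrative, not necessarily the code path used here):

```python
from urllib.parse import urlsplit

def hostname_from_host_header(host_header: str) -> str:
    """Extract just the hostname from a Host header like 'example.com:8080'.

    urlsplit handles the IPv6 bracket form ('[::1]:8080') that a naive
    host_header.split(':')[0] would mangle.
    """
    return urlsplit(f"//{host_header}").hostname or host_header
```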
-
Your Name authored
Fix: use DiffusionPipeline for custom model support (ZImagePipeline), as the original code did
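A hedged sketch of loading a custom pipeline class through DiffusionPipeline, assuming the checkpoint ships its own pipeline code; the model ID is a placeholder:

```python
import torch
from diffusers import DiffusionPipeline

# DiffusionPipeline dispatches to a custom pipeline class (e.g. ZImagePipeline)
# when the checkpoint declares one; trust_remote_code opts in to running it.
pipe = DiffusionPipeline.from_pretrained(
    "org/z-image-turbo",         # placeholder model ID
    torch_dtype=torch.bfloat16,  # see the --image-precision fix above
    trust_remote_code=True,
)
```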
-
Your Name authored
- Default load mode is now 'loadall' (preload) instead of 'ondemand'
- Only use ondemand when --nopreload is explicitly specified
- Model will now be loaded at startup by default
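A minimal sketch of the flag-to-mode mapping, assuming argparse; apart from --nopreload and the two mode names, details are illustrative:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--nopreload", action="store_true",
                    help="defer model loading until the first request")
args = parser.parse_args()

# Preload by default; only fall back to on-demand loading when asked to.
load_mode = "ondemand" if args.nopreload else "loadall"
```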
-
Your Name authored
- get_model_for_request now triggers model loading if not already loaded
- Added _load_default_model() method to load the default model on demand
- Added _load_model_by_name() method to load any model on demand
- Fixes 503 'Model not loaded' error when requesting the 'default' model
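A sketch of the lazy-loading shape described here; only the method names come from the commit, the manager class and loader internals are assumptions:

```python
class ModelManager:
    """Illustrative lazy loader: resolve a model, loading it on first use."""

    def __init__(self, default_name: str):
        self.default_name = default_name
        self.models: dict[str, object] = {}

    def get_model_for_request(self, name: str):
        if name == "default":
            return self._load_default_model()
        return self._load_model_by_name(name)

    def _load_default_model(self):
        return self._load_model_by_name(self.default_name)

    def _load_model_by_name(self, name: str):
        # Load on demand instead of returning 503 'Model not loaded'.
        if name not in self.models:
            self.models[name] = self._load_from_disk(name)  # hypothetical helper
        return self.models[name]

    def _load_from_disk(self, name: str):
        raise NotImplementedError  # backend-specific loading goes here
```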
-
Your Name authored
- Show full request body without truncation
- Include HTTP method, URL, and headers
- Pretty-print JSON bodies
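A minimal sketch of such request logging as FastAPI middleware; the logger name and formatting are assumptions:

```python
import json
import logging

from fastapi import FastAPI, Request

app = FastAPI()
log = logging.getLogger("request-debug")

@app.middleware("http")
async def log_requests(request: Request, call_next):
    body = await request.body()  # full body, no truncation
    try:
        pretty = json.dumps(json.loads(body), indent=2)  # pretty-print JSON bodies
    except ValueError:
        pretty = body.decode("utf-8", errors="replace")
    log.debug("%s %s\nheaders=%s\nbody=%s",
              request.method, request.url, dict(request.headers), pretty)
    return await call_next(request)
```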
-
Your Name authored
- text.py had a local global_debug variable that shadowed the state module's flag
- Changed text.py to import get_global_debug from the state module
- Changed set_global_debug() in text.py to call the state module's function
- Changed all 'if global_debug:' checks to 'if get_global_debug():' in text.py
- log.py was already using get_global_debug() correctly
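The underlying pitfall, sketched with simplified module contents: a module that copies the flag into its own global at import time sees a stale value forever, while going through the accessor reads the single source of truth:

```python
# state.py -- single source of truth for the debug flag
_global_debug = False

def set_global_debug(value: bool) -> None:
    global _global_debug
    _global_debug = value

def get_global_debug() -> bool:
    return _global_debug

# text.py -- the buggy pattern copied the flag at import time:
#   from state import _global_debug as global_debug  # frozen copy, never updates
# The fix is to go through the accessor on every check:
from state import get_global_debug

def handle():
    if get_global_debug():  # reflects the latest set_global_debug() call
        print("debug: handling request")
```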
-
Your Name authored
- Create codai/api/state.py for shared global state functions
- images.py now imports get_load_mode from state instead of app
- app.py re-exports functions from state for backward compatibility
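A sketch of the re-export pattern for backward compatibility; the function bodies are assumptions, only the module paths and get_load_mode come from the commit:

```python
# codai/api/state.py -- shared global state, importable by every API module
_load_mode = "loadall"

def set_load_mode(mode: str) -> None:
    global _load_mode
    _load_mode = mode

def get_load_mode() -> str:
    return _load_mode

# codai/api/app.py -- re-export so existing imports keep working
from codai.api.state import get_load_mode, set_load_mode  # noqa: F401
```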
-
Your Name authored
- Move parse_args to codai.cli
- Move main() to codai.main
- Simplify coderai to be a thin wrapper importing from the codai package
- Create codai.api module with organized endpoints:
  - codai/api/app.py: FastAPI app, /v1/models, /v1/files, get_load_mode
  - codai/api/text.py: /v1/chat/completions, legacy /v1/completions
  - codai/api/images.py: /v1/images/generations
  - codai/api/transcriptions.py: /v1/audio/transcriptions
  - codai/api/tts.py: /v1/audio/speech
- coderai is now a backward compatible entry point only
-
- 18 Mar, 2026 10 commits
-
Your Name authored
- Fixed AttributeError where Tool.get() was called on a Pydantic model
- Added isinstance() checks to handle both dict and Pydantic Tool formats
- This fixes the error when using --force-reasoning with tools
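A hedged sketch of the dual-format handling; the Tool model shape and field names are assumptions:

```python
from pydantic import BaseModel

class Tool(BaseModel):  # assumed shape of the Pydantic tool model
    name: str

def tool_name(tool) -> str:
    """Accept both the dict and the Pydantic form of a tool definition."""
    if isinstance(tool, dict):
        return tool.get("name", "")  # dicts support .get()
    if isinstance(tool, Tool):
        return tool.name             # Pydantic models do not; use the attribute
    raise TypeError(f"unsupported tool type: {type(tool).__name__}")
```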
-
Your Name authored
- Added repeat_penalty, presence_penalty, frequency_penalty params to generate() and generate_stream()
- Changed from **kwargs to explicit parameters to match the base class abstract methods
This fixes the TypeError when calling VulkanBackend.generate_stream() with extra params.
-
Your Name authored
- Added missing parameters to generate() and generate_stream() methods
- Updated _generate_normal() and _generate_stream_normal() to use these params
- Also updated base.py abstract method signatures to match
This fixes the TypeError when using repeat_penalty with the NVIDIA backend.
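A sketch of the signature alignment both backend fixes describe, with an assumed minimal base class; parameter defaults are illustrative:

```python
from abc import ABC, abstractmethod
from typing import Iterator

class Backend(ABC):
    @abstractmethod
    def generate_stream(
        self,
        prompt: str,
        repeat_penalty: float = 1.0,
        presence_penalty: float = 0.0,
        frequency_penalty: float = 0.0,
    ) -> Iterator[str]: ...

class VulkanBackend(Backend):
    # Explicit parameters (not **kwargs) so the signature matches the base
    # class; callers passing these penalties no longer raise TypeError.
    def generate_stream(
        self,
        prompt: str,
        repeat_penalty: float = 1.0,
        presence_penalty: float = 0.0,
        frequency_penalty: float = 0.0,
    ) -> Iterator[str]:
        yield from ()  # placeholder for the real token stream
```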
-
Your Name authored
New patterns added to repair_broken_tool_calls():
1. Pattern 4: <tool><function>NAME</function><parameters>XML</parameters></tool>
   - Converts XML parameters to JSON format
   - Fills missing required params (e.g., path for list_files)
2. Pattern 0a: <tool><NAME><params></NAME></tool> (with closing tool name tag)
   - Handles the format with a closing tag for the tool name
3. Expanded guard to detect known tool names used as wrapper tags
   - Now detects <fetch_instructions>, <list_files>, etc.
4. Fixed closure bug in Pattern -2 (wrong wrapper tags)
   - Used a default argument to capture the loop variable correctly
5. Post-processing: fill missing required parameters
   - list_files gets path='.' if missing
   - search_files gets path='.' if missing
All 6 test cases pass:
- <tool><function>list_files</function><parameters>...</parameters></tool> -> OK
- <fetch_instructions><task>read_file</task>...</fetch_instructions> -> OK
- <tool_call><list_files></list_files></tool_call> -> OK
- <tool><list_files><path>.</path></list_files></tool> -> OK
- Valid JSON passthrough -> OK
- Missing required params auto-filled -> OK
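A simplified sketch of the Pattern 4 style repair, rewriting the <tool><function>...</function><parameters>...</parameters></tool> form into the JSON form; the regexes are illustrative, not the project's:

```python
import json
import re

FUNC_FORM = re.compile(
    r"<tool>\s*<function>(?P<name>\w+)</function>\s*"
    r"<parameters>(?P<params>.*?)</parameters>\s*</tool>",
    re.DOTALL,
)
PARAM_TAG = re.compile(r"<(\w+)>(.*?)</\1>", re.DOTALL)

def repair_function_form(text: str) -> str:
    """Rewrite the XML 'function' form into <tool>{JSON}</tool>."""
    def to_json(m: re.Match) -> str:
        args = dict(PARAM_TAG.findall(m.group("params")))
        args.setdefault("path", ".")  # fill a missing required param
        return "<tool>" + json.dumps(
            {"name": m.group("name"), "arguments": args}) + "</tool>"
    return FUNC_FORM.sub(to_json, text)
```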
-
Your Name authored
Added 3 new repair patterns to handle additional hallucinated formats:
1. Pattern -1: fix <tool_call> wrapper format
   - Converts <tool_call><list_files></list_files> to the proper <tool> wrapper
   - Handles nested <tool_call><tool>...</tool></tool_call> format
2. Pattern -2: handle wrong wrapper tags
   - Fixes when the model uses a tool name as the wrapper: <fetch_instructions>...</fetch_instructions>
   - Converts to proper JSON format: {"name": "tool_name", "arguments": {...}}
   - Supports all known tools (read_file, list_files, etc.)
3. Pattern -3: handle incomplete tool calls with missing parameters
   - Detects <tool><list_files></list_files> (no parameters)
   - Provides sensible defaults: list_files gets path='.' and recursive=False
   - Prevents extraction failures due to missing required parameters
These patterns fix the hallucination issues observed in debug.log where the model produces broken XML formats despite --ggg (grammar-guided generation) being enabled.
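The closure bug later fixed in Pattern -2 (see the commit above) is the classic late-binding pitfall when building handlers in a loop over known tool names; a minimal illustration of the bug and the default-argument fix:

```python
# Late binding: every closure sees the final value of the loop variable.
matchers = [lambda: f"</{tool}>" for tool in ("read_file", "list_files")]
print(matchers[0]())  # '</list_files>' -- wrong, both closures see the last tool

# Fix: bind the loop variable through a default argument at definition time.
matchers = [lambda tool=tool: f"</{tool}>" for tool in ("read_file", "list_files")]
print(matchers[0]())  # '</read_file>' -- correct
```
-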
Your Name authored
- Added a repair_broken_tool_calls() function that handles common hallucinated formats:
  - <tool><tool_name><param>value</param></tool> (missing closing tag)
  - <tool><tool_name><param>value</param></tool_name></tool>
  - Simple format: <tool><list_files><path>.</path><recursive>true</recursive></tool>
- Integrated repair into:
  - QwenParser.parse() - primary parser for Qwen models
  - ToolCallParser.extract_tool_calls() - fallback parser
  - ModelParserAdapter.extract_tool_calls() - adapter wrapper
The repair converts the broken XML format to valid JSON:
  <tool><list_files><path>.</path><recursive>true</recursive></tool>
becomes:
  <tool>{"name": "list_files", "arguments": {"path": ".", "recursive": true}}</tool>
This fixes tool call extraction when the model hallucinates broken XML tags.
-
Your Name authored
- Fixed streaming mode pipeline issues:
  - Fixed n-gram counting to handle partial matches correctly
  - Added per-chunk filtering to prevent duplicate n-grams across chunks
- Optimized regex patterns (~35 patterns pre-compiled):
  - Pre-compiled all regex patterns for better performance
  - Added false positive protection with length-based filtering
  - Optimized tool call parsing in parser.py
- Added grammar-guided generation (--ggg / --grammar-guided-gen):
  - New GBNF grammar file (tool_call_grammar.gbnf) for tool call parsing
  - Grammar loading utilities in models/grammar.py
  - Vulkan backend: added GBNF grammar support via llama_generate_grammar
  - CUDA backend: added outlines support for structured output
- Added prompt distillation (--tools-closer-prompt):
  - New CLI option --tools-closer-prompt for prompt distillation
  - Enables generating distilled tool descriptions for better accuracy
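A minimal sketch of n-gram repetition detection of the kind described here, operating on tokens; the window size and threshold are assumptions:

```python
from collections import Counter

def has_ngram_repetition(tokens: list[str], n: int = 4, threshold: int = 3) -> bool:
    """Flag output where any n-gram repeats at least `threshold` times."""
    if len(tokens) < n:
        return False
    counts = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return any(c >= threshold for c in counts.values())

# Example: a degenerate loop the filter should catch.
print(has_ngram_repetition(
    "the cat sat the cat sat the cat sat the cat sat".split()))  # True
```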
-
Your Name authored
- Add repetition filtering for model output (n-gram detection)
- Improve reasoning extraction to exclude tool call content
- Add JSON validation for extracted tool calls
- Ensure fixes work in both streaming and non-streaming modes
-
Your Name authored
- Add pattern for the <tool>{JSON}</tool> format
- Handles: <tool>{"name": "web_search", "arguments": {...}}</tool>
-
Your Name authored
- Add patterns to handle the <arguments><command>...</command></arguments> format
- Add patterns to handle <tool_call><tool><name>...<arguments>...nested XML...</arguments></tool></tool_call>
- Fix tool call argument extraction for nested XML formats
-