- 04 Apr, 2026 7 commits
-
-
Your Name authored
-
Your Name authored
-
Your Name authored
v0.9.9: User-based configuration routing for providers, rotations, autoselects, and OAuth2 credentials
- Config admin (from aisbf.json, user_id=None) saves configurations to JSON files
- Database users save configurations to the database (user_providers, user_rotations, user_autoselects tables)
- Dashboard endpoints check user type and route accordingly
- File upload endpoint supports both config admin (files) and database users (database)
- MCP server tools accept user_id parameter and route to appropriate storage
- OAuth2 credential handling already implemented this pattern (Claude, Kilo, Codex)
- Updated CHANGELOG.md, setup.py, and pyproject.toml
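The routing rule described above (config admin writes JSON files, database users write per-user tables) can be sketched as follows. This is an illustrative sketch, not the project's actual code: the function name, table schema, and file layout are assumptions.

```python
import json
from pathlib import Path

def save_provider_config(user_id, name, config, db=None,
                         config_dir=Path("config")):
    """Route a provider configuration to file or database storage.

    user_id=None means the config admin (from aisbf.json): persist to JSON.
    Any other user_id is a database user: persist to user_providers.
    """
    if user_id is None:
        # Config admin: merge into config/providers.json
        path = config_dir / "providers.json"
        data = json.loads(path.read_text()) if path.exists() else {}
        data[name] = config
        path.write_text(json.dumps(data, indent=2))
        return "file"
    # Database user: upsert into the user_providers table
    db.execute(
        "INSERT OR REPLACE INTO user_providers (user_id, name, config) "
        "VALUES (?, ?, ?)",
        (user_id, name, json.dumps(config)),
    )
    return "database"
```

The same dispatch-on-`user_id` shape would apply to rotations and autoselects, each targeting its own table.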
-
Your Name authored
-
Your Name authored
- Added find_config_file() to check config locations in correct order
- Added get_host() to read server.host from config (defaults to 127.0.0.1)
- Fixed get_port() to read from server.port instead of top-level port
- Updated start_server() and start_daemon() to use config-based host
- Updated CHANGELOG.md with the fix
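A minimal sketch of the helpers above. The search order, the fallback port, and the exact paths are assumptions for illustration; only the `server.host` / `server.port` keys and the 127.0.0.1 default come from the commit message.

```python
import json
from pathlib import Path

# Assumed search order: project-local file first, then the user config dir
SEARCH_PATHS = [
    Path("aisbf.json"),
    Path.home() / ".config" / "aisbf" / "aisbf.json",
]

def find_config_file(paths=SEARCH_PATHS):
    """Return the first existing config file, or None."""
    for path in paths:
        if path.exists():
            return path
    return None

def _load(path):
    return json.loads(Path(path).read_text()) if path else {}

def get_host(path=None):
    """Read server.host from the config; default to 127.0.0.1."""
    return _load(path).get("server", {}).get("host", "127.0.0.1")

def get_port(path=None):
    """Read server.port (not a top-level 'port' key); default is assumed 8000."""
    return _load(path).get("server", {}).get("port", 8000)
```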
-
Your Name authored
- Changed authenticate_with_device_flow() to request_device_code_flow() + poll_device_code_completion()
- /dashboard/codex/auth/start now returns immediately with verification URI and user code
- /dashboard/codex/auth/poll checks for completion status
- Fixed poll_device_code_token to raise exception for 403/404 (pending state)
- Dashboard JavaScript opens popup window with verification URI immediately
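The pending-state handling above can be sketched like this: during a Device Authorization Grant, 403/404 from the token endpoint mean the user has not finished in the browser yet, so the poller should keep waiting rather than fail. The exception and function names below are illustrative, not from the source.

```python
class AuthorizationPending(Exception):
    """Raised while the user has not yet completed the device flow."""

def classify_poll_response(status_code):
    """Map an HTTP status from the token endpoint to a poll outcome."""
    if status_code == 200:
        return "complete"
    if status_code in (403, 404):
        # Pending: the poll endpoint reports "pending" and retries later
        raise AuthorizationPending(f"still pending (HTTP {status_code})")
    # Anything else is a genuine failure (expired code, bad client, etc.)
    raise RuntimeError(f"device flow failed (HTTP {status_code})")
```

A poll loop would catch `AuthorizationPending`, sleep for the server-suggested interval, and try again until `"complete"`.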
-
Your Name authored
- New codex provider type using OpenAI-compatible protocol
- OAuth2 authentication via Device Authorization Grant flow
- Provider handler in aisbf/providers/codex.py
- OAuth2 handler in aisbf/auth/codex.py
- Dashboard integration with authentication UI
- Token refresh with automatic retry
- API key exchange from ID token
- Updated version to 0.9.8 in setup.py and pyproject.toml
- Updated CHANGELOG.md, README.md, PYPI.md with Codex documentation
- Added codex provider configuration to config/providers.json
-
- 03 Apr, 2026 12 commits
-
-
Your Name authored
-
Your Name authored
-
Your Name authored
-
Your Name authored
- Trigger: Press ArrowUp, ArrowDown, ArrowUp in sequence within 10 seconds
- Opens a Snake game in a popup window
- Classic Snake gameplay with difficulty selection
- Retro arcade-style with Press Start 2P font
-
Your Name authored
- Fixed ImportError: count_messages_tokens missing by correcting setup.py data_files structure
- Split data_files into separate entries to preserve subdirectory structure (aisbf/, aisbf/providers/, aisbf/providers/kiro/, aisbf/auth/)
- Added 6 missing dashboard templates (user_index.html, user_providers.html, etc.)
- Removed install_requires from setup.py (dependencies managed via venv)
- Updated aisbf.sh to only install requirements on first venv creation
- Added Kiro-cli and Kilocode OAuth2 provider support documentation
- Removed duplicate entries in documentation (Token Usage Analytics, Claude OAuth2)
- Provider module refactoring documentation updates
- Version bumped to 0.9.4
-
Your Name authored
-
Your Name authored
-
Your Name authored
-
Your Name authored
-
Your Name authored
-
Your Name authored
-
Your Name authored
- Updated README.md with comprehensive documentation for new features:
  * User-Specific API Endpoints with Bearer token authentication
  * Adaptive Rate Limiting with learning from 429 responses
  * Model Metadata Extraction with automatic pricing/rate limit detection
  * Enhanced Analytics Filtering by provider/model/rotation
  * Updated Web Dashboard feature list
- Updated DOCUMENTATION.md with detailed sections:
  * Adaptive Rate Limiting configuration and benefits
  * Model Metadata Extraction features and dashboard integration
- Updated CHANGELOG.md:
  * Moved Unreleased section to version 0.9.2 (2026-04-03)
  * Added comprehensive list of new features and changes
- Version bump to 0.9.2:
  * Updated pyproject.toml version
  * Updated aisbf/__init__.py version

This release focuses on improving documentation coverage for recently added features including user-specific API endpoints, adaptive rate limiting, model metadata extraction, and analytics filtering.
-
- 01 Apr, 2026 14 commits
-
-
Your Name authored
Your theory was correct! Claude Code uses the Anthropic SDK with the authToken parameter (not apiKey) for OAuth2 authentication.

From vendors/claude/src/services/api/client.ts lines 300-315:

  const clientConfig = {
    apiKey: isClaudeAISubscriber() ? null : apiKey || getAnthropicApiKey(),
    authToken: isClaudeAISubscriber() ? getClaudeAIOAuthTokens()?.accessToken : undefined,
  }
  return new Anthropic(clientConfig)

Changes:
- providers.py: Use auth_token=access_token (not api_key) for SDK client
- claude_auth.py: Remove create_api_key() and get_api_key() methods (not needed - OAuth2 token is used directly with SDK auth_token)

The create_api_key endpoint is only for creating API keys for use in other contexts (CI/CD, IDEs), not for the main CLI.
-
Your Name authored
Claude Code doesn't use the OAuth2 access token directly for API requests. Instead, it exchanges the OAuth2 token for an API key via:

  POST https://api.anthropic.com/api/oauth/claude_cli/create_api_key
  Authorization: Bearer {oauth_access_token}

This returns a 'raw_key' which is the actual API key used for API requests.

Changes:
- claude_auth.py: Add create_api_key() and get_api_key() methods
  - create_api_key(): Exchanges OAuth2 token for API key
  - get_api_key(): Gets stored API key or creates one if needed
- providers.py: Update _get_sdk_client() to use API key instead of OAuth2 token

This matches the Claude Code flow in vendors/claude/src/services/oauth/client.ts
-
Your Name authored
The Anthropic SDK's messages.stream() is a synchronous context manager, not async. For async streaming, we need to use messages.create(..., stream=True), which returns an async iterator of ServerSentEvent objects.

Changed from:

  async with client.messages.stream(**request_kwargs) as stream:

To:

  stream = await client.messages.create(**request_kwargs, stream=True)
  async for event in stream:
-
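The control flow of the fix above can be demonstrated with a stand-in for the SDK client, so it runs without network access or credentials. `FakeMessages` is a hypothetical test double; with the real async Anthropic client, awaiting `messages.create(..., stream=True)` yields events asynchronously in the same way.

```python
import asyncio

class FakeMessages:
    """Stand-in for an async Anthropic client's .messages namespace."""

    async def create(self, *, stream=False, **kwargs):
        assert stream, "this sketch only models the streaming path"

        async def events():
            # The real stream yields ServerSentEvent objects; strings
            # suffice to show the iteration pattern.
            for ev in ("message_start", "content_block_delta", "message_stop"):
                yield ev

        return events()

async def consume(client_messages):
    # The fixed pattern: await create(stream=True), then `async for`
    stream = await client_messages.create(model="claude-x", stream=True)
    received = []
    async for event in stream:
        received.append(event)
    return received
```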
Your Name authored
Major rewrite to use the official Anthropic Python SDK instead of direct HTTP calls, while maintaining our OAuth2 authentication flow.

Key changes:
- Use Anthropic SDK client with OAuth2 token as api_key
- SDK handles proper message format conversion
- SDK handles automatic retries (max_retries=3)
- SDK handles proper streaming event parsing
- SDK handles correct headers and beta features
- Better error handling and rate limit management

This should fix the rate limiting issues we were seeing with direct HTTP calls, as the SDK implements proper retry logic and request formatting.

New methods:
- _get_sdk_client(): Creates SDK client with OAuth2 token
- _handle_streaming_request_sdk(): SDK-based streaming handler
- get_cache_stats(): Returns cache usage statistics

Removed methods:
- _request_with_retry(): No longer needed (SDK handles retries)
- _handle_streaming_request_with_retry(): Replaced by SDK streaming
- _handle_streaming_request(): Replaced by SDK streaming
-
Your Name authored
Phase 1.2 - Automatic retry with exponential backoff:
- Add _request_with_retry() method for non-streaming requests
- Retries on 429 (with x-should-retry header), 529, 503 errors
- Exponential backoff with jitter (1s, 2s, 4s, max 30s)
- Handles timeouts and HTTP errors gracefully

Phase 1.3 - Streaming idle watchdog:
- Add 90s idle timeout detection (matches vendors/claude)
- Tracks last_event_time and raises TimeoutError on idle
- Prevents indefinite hangs on dropped connections

Phase 2.3 - Cache token tracking:
- Add cache_stats dict to track cache hits/misses
- Track cache_tokens_read and cache_tokens_created
- Add get_cache_stats() method for analytics
- Updates stats during streaming message_delta events

Also includes:
- Temperature fix (skip 0.0 when thinking beta active)
- Rate limit config update (5s default for Claude)
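The backoff policy from Phase 1.2 can be sketched as two small pure functions. The constants (1s base, 30s cap, 429/503/529, x-should-retry) follow the commit message; the 10% jitter range and function names are assumptions.

```python
import random

RETRYABLE_STATUSES = {429, 503, 529}

def backoff_delay(attempt, base=1.0, cap=30.0, rng=random.random):
    """Delay before retry `attempt` (0-based): min(base * 2**attempt, cap) plus jitter."""
    delay = min(base * (2 ** attempt), cap)
    return delay + rng() * delay * 0.1  # up to 10% jitter to avoid thundering herds

def should_retry(status, headers=None):
    """429 retries only when the API sets x-should-retry; 503/529 always retry."""
    if status == 429:
        return (headers or {}).get("x-should-retry") == "true"
    return status in RETRYABLE_STATUSES
```

Injecting `rng` keeps the delay testable; production callers just use the default.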
-
Your Name authored
- Comprehensive analysis of potential improvements
- Recommended improvements without SDK migration:
  1. Message validation pipeline (HIGH priority)
  2. Automatic retry with exponential backoff (HIGH)
  3. Streaming idle watchdog (MEDIUM)
  4. Token counting and context management (MEDIUM)
  5. Cache token tracking (LOW)
- SDK migration analysis with pros/cons
- Recommendation: Don't migrate yet, implement quick wins first
- Hybrid approach evaluation for future consideration
-
Your Name authored
- Claude API requires temperature: 1.0 when thinking is enabled
- Our Anthropic-Beta header includes interleaved-thinking-2025-05-14
- Sending temperature: 0.0 with thinking beta causes API errors
- Now only add temperature to payload if > 0
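The rule above reduces to a single guard when building the request payload: a 0.0 temperature is simply omitted so the API default applies. A minimal sketch, with an illustrative function name:

```python
def build_payload(model, messages, temperature=0.0):
    """Build a request payload, omitting temperature when it is 0.0."""
    payload = {"model": model, "messages": messages}
    if temperature > 0:
        # Sending temperature: 0.0 alongside the interleaved-thinking beta
        # triggers API errors, so only non-zero values are included.
        payload["temperature"] = temperature
    return payload
```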
-
Your Name authored
- Changed rate_limit from 0 to 5 seconds for Claude provider
- Changed rate_limit from 0 to 5 seconds for all Claude models
- This adds a minimum 5-second delay between requests to avoid hitting Anthropic's OAuth2 API rate limits
-
Your Name authored
- Handle 'thinking' and 'redacted_thinking' in content_block_start events
- Handle 'thinking_delta' events to accumulate thinking content during streaming
- Handle 'signature_delta' events for thinking block signatures
- Log thinking block completion with character count
- Thinking content is accumulated but not emitted to client (stored for final response)
- Matches original Claude Code streaming thinking implementation
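The accumulate-but-don't-emit behavior above can be sketched as a small event fold. The event types follow Anthropic's streaming format; the `(type, payload)` tuple shape and function name are illustrative simplifications.

```python
def accumulate_thinking(events):
    """Collect thinking text from a stream of (event_type, payload) pairs.

    Thinking deltas are buffered server-side and never forwarded to the
    client; the joined text is attached to the final response instead.
    """
    thinking = []
    for etype, payload in events:
        if etype == "content_block_start" and payload.get("type") in (
            "thinking", "redacted_thinking",
        ):
            continue  # a thinking block opens; nothing goes to the client
        if etype == "content_block_delta" and payload.get("type") == "thinking_delta":
            thinking.append(payload.get("thinking", ""))
        # signature_delta payloads carry the block signature; ignored here
    return "".join(thinking)
```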
-
Your Name authored
- Added detailed analysis of 3419-line claude.ts implementation
- Expanded streaming comparison with 30+ features from original source
- Updated message conversion comparison with normalizeMessagesForAPI details
- Added comprehensive feature comparison table for streaming implementations
- Documented advanced features: idle watchdog, stall detection, VCR support, cache break detection, cost tracking, memory cleanup, request ID tracking
-
Your Name authored
Analysis of debug.log showed 429 rate limit errors during streaming were not being caught by the retry logic because:
1. Streaming generators don't raise exceptions until consumed
2. Error message 'Claude API error (429): Error' didn't contain retry keywords

Changes:
1. Added _handle_streaming_request_with_retry() wrapper that catches rate limit errors and re-raises with proper keywords
2. Added _wrap_streaming_with_retry() method that consumes the streaming generator and retries with fallback models on rate limit errors
3. Updated retry logic to check for '429' keyword in error messages
4. Added exponential backoff with jitter before retry attempts
5. Improved error messages to include rate limit context

This ensures that when streaming hits a 429 rate limit, the system automatically retries with fallback models instead of failing.
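The core of the fix above is that a generator only raises when consumed, so the retry wrapper must eagerly iterate the stream and fall back when the error text contains '429'. A simplified sketch with illustrative names (the real method presumably re-yields chunks rather than buffering them):

```python
def stream_with_retry(make_stream, models):
    """Try each model in turn; return the chunks of the first stream that survives.

    make_stream(model) returns a generator that may raise mid-iteration.
    """
    last_error = None
    for model in models:
        chunks = []
        try:
            for chunk in make_stream(model):
                chunks.append(chunk)  # consume, so rate-limit errors surface here
        except Exception as exc:
            if "429" in str(exc):
                last_error = exc
                continue  # rate-limited: retry with the next fallback model
            raise  # non-rate-limit errors propagate immediately
        return chunks
    raise last_error
```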
-
Your Name authored
Add image content block handling to ClaudeProviderHandler:

1. Image Extraction (_extract_images_from_content):
   - Extract images from OpenAI message content format
   - Handle base64 data URLs (data:image/jpeg;base64,...)
   - Handle HTTP/HTTPS URL-based images
   - Convert to Anthropic image source format
   - Validate image size (5MB limit for base64)
   - Pass through existing Anthropic-format image blocks

2. Image Integration in Message Conversion:
   - Extract images from user message content blocks
   - Convert image_url blocks to Anthropic image source format
   - Add image blocks to anthropic_messages content array
   - Preserve text content alongside images

Reference: vendors/kilocode image handling + vendors/claude multimodal support
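The data-URL-to-Anthropic conversion described above can be sketched as follows. The parsing and the URL-source fallback are illustrative assumptions; the 5 MB base64 limit comes from the commit message.

```python
import base64

MAX_IMAGE_BYTES = 5 * 1024 * 1024  # 5MB limit for base64 images

def to_anthropic_image(image_url):
    """Convert a data: or http(s): image URL to an Anthropic image source block."""
    if image_url.startswith("data:"):
        # data:image/png;base64,<payload>
        header, _, b64_data = image_url.partition(",")
        media_type = header[len("data:"):].split(";")[0] or "image/jpeg"
        if len(base64.b64decode(b64_data)) > MAX_IMAGE_BYTES:
            raise ValueError("base64 image exceeds 5MB limit")
        return {
            "type": "image",
            "source": {"type": "base64", "media_type": media_type,
                       "data": b64_data},
        }
    # HTTP/HTTPS images pass through as URL sources
    return {"type": "image", "source": {"type": "url", "url": image_url}}
```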
-
Your Name authored
Add three robustness improvements to ClaudeProviderHandler:

1. Message Role Validation (_validate_messages):
   - Validate roles are one of: user, assistant, system, tool
   - Auto-fix unknown roles to 'user'
   - Ensure system messages only appear at start
   - Insert synthetic assistant messages between consecutive user messages
   - Merge consecutive assistant messages
   - Validate tool messages have tool_call_id
   - Reference: vendors/kilocode normalizeMessages() + ensure_alternating_roles()

2. Tool Result Size Validation (_truncate_tool_result):
   - Truncate oversized tool results with configurable limit (default 100k chars)
   - Add truncation notice with original length info
   - Reference: vendors/claude applyToolResultBudget

3. Model Fallback Support (handle_request refactoring):
   - Add _get_fallback_models() to read fallback list from config
   - Retry with fallback models on retryable errors (rate limit, overloaded)
   - Split into handle_request() (with retry) and _handle_request_with_model() (actual logic)
   - Log fallback attempts for debugging

All methods integrated into handle_request() for automatic application.
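Two of the role-validation rules above (auto-fixing unknown roles, inserting a synthetic assistant turn between consecutive user messages) can be sketched in a few lines. This is a deliberately reduced subset of the listed checks; the synthetic-message content is an assumption.

```python
VALID_ROLES = {"user", "assistant", "system", "tool"}

def validate_messages(messages):
    """Normalize roles and keep user/assistant turns alternating."""
    fixed = []
    for msg in messages:
        if msg.get("role") not in VALID_ROLES:
            msg = {**msg, "role": "user"}  # auto-fix unknown roles to 'user'
        if fixed and msg["role"] == "user" and fixed[-1]["role"] == "user":
            # Insert a synthetic assistant turn so roles alternate, as
            # Claude rejects back-to-back user messages.
            fixed.append({"role": "assistant", "content": "(continued)"})
        fixed.append(msg)
    return fixed
```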
-
Your Name authored
Add three key improvements to ClaudeProviderHandler:

1. Thinking Block Support (Phase 2.1):
   - Extract thinking/reasoning content from Claude API responses
   - Handle both 'thinking' and 'redacted_thinking' block types
   - Store thinking content in provider_options for downstream access
   - Reference: vendors/kilocode thinking support via AI SDK

2. Tool Call Streaming (Phase 2.2):
   - Parse content_block_start events for tool_use blocks
   - Stream tool call arguments via input_json_delta events
   - Emit tool calls in OpenAI streaming format on content_block_stop
   - Reference: fine-grained-tool-streaming-2025-05-14 beta feature

3. Detailed Usage Metadata (Phase 2.3):
   - Extract cache_read_input_tokens from API response
   - Extract cache_creation_input_tokens from API response
   - Add prompt_tokens_details and completion_tokens_details to usage
   - Log cache usage for analytics
   - Reference: vendors/kilocode session/index.ts usage extraction

All methods integrated into _convert_to_openai_format() and _handle_streaming_request() for automatic application.
-
- 31 Mar, 2026 7 commits
-
-
Your Name authored
Add three key improvements to ClaudeProviderHandler based on comparison with vendors/kilocode implementation:

1. Tool Call ID Sanitization (_sanitize_tool_call_id):
   - Replace invalid characters in tool call IDs with underscores
   - Claude API requires alphanumeric, underscore, hyphen only
   - Reference: vendors/kilocode normalizeMessages() sanitization

2. Empty Content Filtering (_filter_empty_content):
   - Filter out empty string messages and empty text parts
   - Claude API rejects messages with empty content
   - Reference: vendors/kilocode normalizeMessages() filtering

3. Prompt Caching (_apply_cache_control):
   - Apply ephemeral cache_control to last 2 messages
   - Enable Anthropic's prompt caching feature for cost savings
   - Reference: vendors/kilocode applyCaching()

All methods integrated into _convert_messages_to_anthropic() for automatic application during message conversion.
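The first two improvements above reduce to one-liners; a regex is one plausible way to implement the sanitization rule (alphanumerics, underscore, hyphen only):

```python
import re

def sanitize_tool_call_id(tool_call_id):
    """Replace any character Claude rejects in tool call IDs with '_'."""
    return re.sub(r"[^a-zA-Z0-9_-]", "_", tool_call_id)

def filter_empty_content(messages):
    """Drop messages with empty content, which the Claude API rejects."""
    return [m for m in messages if m.get("content")]
```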
-
Your Name authored
Create docs/claude_provider_improvement_plan.md with detailed implementation plan for AISBF ClaudeProviderHandler improvements identified in the provider comparison analysis.

Plan includes 10 improvements across 4 phases:
- Phase 1 (Quick Wins): Tool call ID sanitization, empty content filtering, prompt caching
- Phase 2 (Core): Thinking block support, tool call streaming, usage metadata
- Phase 3 (Robustness): Message validation, tool result size limits, fallback
- Phase 4 (Advanced): Image/multimodal support

Each improvement includes: problem statement, reference implementation, detailed implementation steps, files to modify, and effort estimate.

Total estimated effort: 24-37 hours across 4 weeks.
-
Your Name authored
Document now correctly compares only the three Claude provider implementations:
- AISBF (aisbf/providers.py) - Direct HTTP with OAuth2
- vendors/kilocode (vendors/kilocode/packages/opencode/src/provider/) - AI SDK
- vendors/claude (vendors/claude/src/) - Original Claude Code

All tables and references now use these three sources exclusively. Removed all Kiro Gateway content which was unrelated to Claude.
-
Your Name authored
Kiro Gateway is an Amazon Q Developer implementation using AWS CodeWhisperer API, not a Claude provider. The comparison now focuses on actual Claude implementations:
- AISBF Claude Provider (direct HTTP with OAuth2)
- Original Claude Code (TypeScript/React from Anthropic)
- KiloCode (TypeScript using @ai-sdk/anthropic)

Removed all Kiro-related sections including:
- Kiro Gateway architecture comparison
- Kiro message conversion and tool handling
- Kiro streaming (AWS Event Stream)
- Kiro model name normalization
- Kiro exclusive features (thinking injection, truncation recovery, etc.)

Document now cleanly compares three Claude provider implementations.
-
Your Name authored
- Add KiloCode implementation analysis (vendors/kilocode/packages/opencode/src/provider/)
- Compare KiloCode's AI SDK approach (@ai-sdk/anthropic) vs direct HTTP
- Document KiloCode's features: automatic prompt caching, thinking support, message validation, reasoning variants, model management
- Add comparison tables for architecture, message conversion, streaming, headers, model resolution, reasoning/thinking support, prompt caching
- Document KiloCode exclusive features: empty content filtering, tool call ID sanitization, duplicate reasoning fix, provider option remapping, Gemini schema sanitization, unsupported part handling
- Update summary with KiloCode strengths and additional improvement areas
-
Your Name authored
- Add comprehensive Kiro Gateway analysis alongside Claude Code comparison
- Document Kiro's unified intermediate message format approach
- Compare streaming implementations (SSE vs AWS Event Stream)
- Document Kiro's advanced features: thinking injection, tool content stripping, image extraction, truncation recovery, model name normalization
- Add comparison tables for architecture, message handling, tools, streaming
- Identify patterns from Kiro that could improve AISBF (unified format, message validation, multimodal support)
-
Your Name authored
- Add comprehensive comparison of AISBF Claude provider vs original Claude Code source
- Document message conversion, tool handling, streaming, and response parsing differences
- Identify areas for improvement: thinking blocks, tool call streaming, usage metadata
- Include all other pending changes across the codebase
-