    feat: implement smart request batching (v0.8.0) · 709b6f80
    Your Name authored
    - Add aisbf/batching.py module with RequestBatcher class
    - Implement time-based (100ms window) and size-based batching
    - Add provider-specific batching configurations (OpenAI: 10, Anthropic: 5)
    - Integrate batching with BaseProviderHandler
    - Add batching configuration to config/aisbf.json
    - Initialize batching system in main.py startup
    - Update version to 0.8.0 in setup.py and pyproject.toml
    - Add batching.py to setup.py data_files
    - Update README.md and TODO.md documentation
    - Expected benefit: 15-25% latency reduction
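The batching settings added to config/aisbf.json might look something like the sketch below. This is an assumed schema built from the values stated above (100 ms window; OpenAI batch size 10, Anthropic 5), not the file's actual layout:

```json
{
  "batching": {
    "enabled": true,
    "window_ms": 100,
    "providers": {
      "openai": { "max_batch_size": 10 },
      "anthropic": { "max_batch_size": 5 }
    }
  }
}
```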
    
    Features:
    - Automatic batch formation and processing
    - Response splitting and distribution
    - Statistics tracking (batches formed, requests batched, avg batch size)
    - Graceful error handling and fallback
    - Non-blocking async queue management
    - Streaming request bypass (batching disabled for streams)
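The features above can be sketched as a minimal asyncio batcher. The class name `RequestBatcher` comes from the commit; the method names (`submit`, `_flush`) and all internals are assumptions for illustration, not the actual aisbf implementation:

```python
# Hypothetical sketch of RequestBatcher: flushes when the time window
# elapses (time-based) or the batch reaches max_batch_size (size-based).
import asyncio


class RequestBatcher:
    def __init__(self, max_batch_size=10, window_ms=100, process_batch=None):
        self.max_batch_size = max_batch_size
        self.window_s = window_ms / 1000.0
        self.process_batch = process_batch  # async: list[request] -> list[response]
        self._pending = []                  # (request, future) pairs
        self._flush_task = None
        # Statistics tracking, as listed in the commit.
        self.stats = {"batches_formed": 0, "requests_batched": 0}

    async def submit(self, request, stream=False):
        # Streaming requests bypass batching entirely.
        if stream:
            return (await self.process_batch([request]))[0]
        fut = asyncio.get_running_loop().create_future()
        self._pending.append((request, fut))
        if len(self._pending) >= self.max_batch_size:
            await self._flush()             # size-based flush
        elif self._flush_task is None:
            self._flush_task = asyncio.ensure_future(self._flush_later())
        return await fut

    async def _flush_later(self):
        await asyncio.sleep(self.window_s)  # time-based flush
        self._flush_task = None
        await self._flush()

    async def _flush(self):
        if self._flush_task is not None:
            self._flush_task.cancel()       # stop the pending timer
            self._flush_task = None
        batch, self._pending = self._pending, []
        if not batch:
            return
        self.stats["batches_formed"] += 1
        self.stats["requests_batched"] += len(batch)
        try:
            responses = await self.process_batch([r for r, _ in batch])
        except Exception as exc:
            # Graceful fallback: fail every waiter instead of hanging.
            for _, fut in batch:
                if not fut.done():
                    fut.set_exception(exc)
            return
        # Split the batched response back out to each caller.
        for (_, fut), resp in zip(batch, responses):
            if not fut.done():
                fut.set_result(resp)


# Usage sketch: three concurrent requests with max_batch_size=3 form one batch.
async def demo():
    async def double_all(requests):
        return [r * 2 for r in requests]
    batcher = RequestBatcher(max_batch_size=3, window_ms=50,
                             process_batch=double_all)
    results = await asyncio.gather(*(batcher.submit(i) for i in range(3)))
    return results, batcher.stats
```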
Repository contents:
- aisbf/
- config/
- templates/
- .gitignore
- .providers.json.swp
- AI.PROMPT
- API_EXAMPLES.md
- CHANGELOG.md
- DEBUG_GUIDE.md
- DOCUMENTATION.md
- LICENSE.txt
- MANIFEST.in
- PYPI.md
- README.md
- TODO.md
- aisbf.sh
- build.sh
- clean.sh
- cli.py
- main.py
- pyproject.toml
- requirements.txt
- screenshot.png
- setup.py
- start_proxy.sh
- test_google.sh
- test_proxy.sh
- test_response_cache.py