Update README and documentation with author, repository, and donation information

# AISBF - AI Service Broker Framework || AI Should Be Free

AISBF is a modular proxy server for managing multiple AI provider integrations. It provides a unified API interface for interacting with various AI services (Google, OpenAI, Anthropic, Ollama), with support for provider rotation and error tracking.

## Author
Stefy Lanza <stefy@nexlab.net>

## Repository
Official repository: https://git.nexlab.net/nexlab/aisbf.git
## Architecture
The proxy is organized into multiple modules for better maintainability:
- `main.py` - Entry point and FastAPI application setup
- `config.py` - Configuration management and provider loading
- `models.py` - Pydantic models for data structures
- `providers.py` - Provider type handlers with proper library integration
- `handlers.py` - Request/response handling logic
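As a hedged illustration of how `providers.py` might route a request to the right type handler, here is a minimal dispatch-table sketch. All function and variable names below are hypothetical, not the project's actual code:

```python
# Hypothetical sketch of a provider-type dispatch table; one way
# providers.py could route requests to per-provider handlers.
from typing import Callable, Dict

def handle_openai(payload: dict) -> dict:
    # Real code would call the `openai` client library here.
    return {"handled_by": "openai", "model": payload.get("model")}

def handle_ollama(payload: dict) -> dict:
    # Real code would issue a direct HTTP request to the Ollama API.
    return {"handled_by": "ollama", "model": payload.get("model")}

HANDLERS: Dict[str, Callable[[dict], dict]] = {
    "openai": handle_openai,
    "ollama": handle_ollama,
}

def dispatch(provider_type: str, payload: dict) -> dict:
    """Look up the handler for a provider type and invoke it."""
    try:
        handler = HANDLERS[provider_type]
    except KeyError:
        raise ValueError(f"Unsupported provider type: {provider_type}")
    return handler(payload)
```

A table keyed by the provider `type` field keeps adding a new provider down to registering one more handler function.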
## Supported Providers
The proxy supports multiple AI providers with proper type handlers:
- **Google** - Uses the `google-genai` library
- **OpenAI** - Uses the `openai` library (also covers OpenAI-compatible endpoints)
- **Anthropic** - Uses the `anthropic` library
- **Ollama** - Uses direct HTTP requests
## Quick Start
### Installation
```bash
pip install -r requirements.txt
python setup.py install
```
### Usage
Start the proxy server:
```bash
./start_proxy.sh
```
The server will start on `http://localhost:8000`.
## API Endpoints
### Server Status
```http
GET /
```
Returns server status and the list of configured providers.
### Chat Completions
```http
POST /api/{provider_id}/chat/completions
```
**Request Body:**
```json
{
  "model": "model_name",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "max_tokens": 100,
  "temperature": 0.7,
  "stream": false
}
```
### List Models
```http
GET /api/{provider_id}/models
```
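A chat-completions request can be assembled in Python as a quick sketch; the base URL and provider id below are assumptions matching the endpoint pattern and default address above:

```python
# Build a chat-completions request for the proxy. The provider id
# "openai" and the base URL are illustrative assumptions.
import json

BASE_URL = "http://localhost:8000"  # default address from Quick Start

def build_chat_request(provider_id: str, model: str, user_message: str):
    """Return the endpoint URL and a JSON-encoded request body."""
    url = f"{BASE_URL}/api/{provider_id}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 100,
        "temperature": 0.7,
        "stream": False,
    }
    return url, json.dumps(payload)

url, body = build_chat_request("openai", "model_name", "Hello!")
print(url)  # http://localhost:8000/api/openai/chat/completions
```

The body can then be sent with any HTTP client, e.g. `requests.post(url, data=body, headers={"Content-Type": "application/json"})`.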
## Configuration
Providers are configured in `config/providers.json` with proper type definitions (see `config/rotations.json` for rotation examples):
```json
{
  "providers": {
    "openai": {
      "id": "openai",
      "name": "OpenAI",
      "endpoint": "https://api.openai.com/v1",
      "type": "openai",
      "api_key_required": true
    }
  }
}
```
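A hedged sketch of loading and sanity-checking this file: the field names come from the example above, but the validation logic itself is illustrative, not the project's actual `config.py`:

```python
# Parse the providers configuration and check required fields.
# The JSON shape mirrors the example above; validation is illustrative.
import json

REQUIRED_FIELDS = {"id", "name", "endpoint", "type"}

def load_providers(config_text: str) -> dict:
    """Parse providers JSON and verify each entry has required fields."""
    config = json.loads(config_text)
    providers = config.get("providers", {})
    for pid, entry in providers.items():
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"Provider {pid!r} missing fields: {sorted(missing)}")
    return providers

example = """
{
  "providers": {
    "openai": {
      "id": "openai",
      "name": "OpenAI",
      "endpoint": "https://api.openai.com/v1",
      "type": "openai",
      "api_key_required": true
    }
  }
}
"""
providers = load_providers(example)
```

Failing fast on a malformed entry at startup is cheaper than discovering it on the first proxied request.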
## Error Handling
The proxy includes robust error handling with:
- Rate limiting for failed requests
- Automatic retry with provider rotation
- Proper error tracking and logging
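The retry-with-rotation behavior can be sketched as follows; this is a simplified illustration under assumed names, not the project's actual implementation:

```python
# Simplified provider rotation: try each provider in order and fall
# through to the next on failure. Illustrative only; real code would
# also log errors and rate-limit repeatedly failing providers.
from typing import Callable, Iterable

def call_with_rotation(providers: Iterable[str],
                       send: Callable[[str], str]) -> str:
    """Call `send` for each provider id until one succeeds."""
    errors = {}
    for provider_id in providers:
        try:
            return send(provider_id)
        except Exception as exc:
            errors[provider_id] = str(exc)
    raise RuntimeError(f"All providers failed: {errors}")
```

Collecting per-provider errors before giving up makes the final failure message actionable instead of reporting only the last exception.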
## Donations
The project includes multiple donation options to support its development:
### Web3/MetaMask Donation
Works on any website: the Web3 donation is completely independent of the current page and is always available regardless of browser setup.
### Bitcoin Donation
Address: `bc1qcpt2uutqkz4456j5r78rjm3gwq03h5fpwmcc5u`
Traditional BTC donation method.
## Documentation
See `DOCUMENTATION.md` for complete API documentation, configuration details, and development guides.
## License
GNU General Public License v3.0