Commit 862ec611 authored by nextime

Initial commit: Add OLProxy with comprehensive documentation and GPLv3 licensing

- Add olproxy.py: Python proxy server bridging web chatbots to Ollama API
- Add README.md: Comprehensive documentation with logo, usage examples, and API reference
- Add CHANGELOG.md: Structured changelog following Keep a Changelog format
- Add LICENSE: GPLv3 license with copyright attribution to Stefy Lanza
- Add olproxy.jpg: Project logo
- All files include proper GPLv3 licensing and attribution
# Changelog
All notable changes to the OLProxy project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Initial project documentation
- GPLv3 license implementation
- Comprehensive README with usage examples
## [0.1.0] - 2025-08-23
### Added
- Initial release of OLProxy
- Ollama API compatibility layer
- OpenAI API compatibility layer
- Playwright-based web chatbot integration
- Support for Grok AI via X/Twitter interface
- Support for Grok AI via grok.com interface
- Advanced response extraction with spy word technique
- Progressive content detection fallback system
- Heuristic response detection as final fallback
- CORS middleware for web application integration
- Persistent browser context for session management
- Command-line interface with configurable IP and port
- Support for connecting to existing browser instances via CDP
- Health check endpoint
- Comprehensive error handling and logging
- Multiple model configuration support
### Features
- **API Endpoints**:
  - `GET /api/tags` - List available models
  - `GET /api/show/{model}` - Show model details
  - `POST /api/generate` - Generate text completion
  - `POST /api/chat` - Chat completion
  - `GET /api/ps` - List loaded models
  - `GET /api/version` - Get version information
  - `GET /` - Health check
  - `GET /v1/models` - OpenAI-compatible model listing
  - `POST /v1/chat/completions` - OpenAI-compatible chat completion
- **Response Extraction Strategies**:
  - Primary: Spy word detection with flexible pattern matching
  - Secondary: Progressive content monitoring with stability detection
  - Fallback: Heuristic latest response detection
- **Browser Management**:
  - Persistent browser context in `./playwright_data/`
  - Automatic page management per chatbot model
  - Support for headless and headed browser modes
  - CDP connection support for external browser instances
- **Configuration**:
  - Configurable chatbot endpoints via `CHATBOT_CONFIG`
  - Customizable CSS selectors for different web interfaces
  - Flexible spy word configuration per model
  - Command-line argument support
### Technical Details
- Built with Python 3.7+ compatibility
- Uses `aiohttp` for async HTTP server
- Uses `playwright` for browser automation
- Implements comprehensive error handling
- Supports concurrent requests with proper browser page management
- Includes detailed logging for debugging and monitoring
### Known Limitations
- Dependent on web interface stability of target chatbot services
- Subject to rate limiting of underlying web services
- May require manual authentication for some services
- Response extraction reliability depends on consistent web UI patterns
---
## Version History
### Version Numbering
This project uses [Semantic Versioning](https://semver.org/):
- **MAJOR** version for incompatible API changes
- **MINOR** version for backwards-compatible functionality additions
- **PATCH** version for backwards-compatible bug fixes
### Release Notes
- **v0.1.0**: Initial stable release with core functionality
- Future versions will include additional chatbot integrations, improved response extraction, and enhanced error handling
---
## Contributing
When contributing to this project, please:
1. Update this changelog with your changes
2. Follow the established format and categorization
3. Include version bumps according to semantic versioning
4. Document breaking changes clearly
5. Add entries under "Unreleased" section until release
### Categories
- **Added** for new features
- **Changed** for changes in existing functionality
- **Deprecated** for soon-to-be removed features
- **Removed** for now removed features
- **Fixed** for any bug fixes
- **Security** for vulnerability fixes
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2025 Stefy Lanza <stefy@nexlab.net>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
For the full text of the GNU General Public License version 3,
please visit: https://www.gnu.org/licenses/gpl-3.0.html
# OLProxy - Ollama API Proxy for Web-based AI Chatbots
![OLProxy Logo](olproxy.jpg)
**OLProxy** is a Python-based proxy server that bridges web-based AI chatbots (like Grok on X/Twitter) to the standardized Ollama API format. This allows developers to use familiar Ollama API calls while leveraging free or accessible web-based AI services, providing a cost-effective solution for AI integration.
## Features
- **Ollama API Compatibility**: Full support for Ollama API endpoints (`/api/generate`, `/api/chat`, `/api/tags`, etc.)
- **OpenAI API Compatibility**: Support for OpenAI-compatible endpoints (`/v1/chat/completions`, `/v1/models`)
- **Web Chatbot Integration**: Uses Playwright to interact with web-based AI interfaces
- **Multiple AI Models**: Configurable support for different chatbot services
- **Smart Response Extraction**: Advanced "spy word" technique for reliable response extraction
- **CORS Support**: Built-in CORS middleware for web application integration
- **Persistent Browser Sessions**: Maintains browser context for efficient interactions
## Supported Models
Currently configured models include:
- `grok:latest` - Grok AI via X/Twitter interface
- `grok-beta:latest` - Grok AI via grok.com
- `llama2:latest` - Mapped to Grok interface
- `codellama:latest` - Mapped to Grok interface
## Installation
### Prerequisites
- Python 3.7+
- Playwright
- aiohttp
### Install Dependencies
```bash
pip install playwright aiohttp
playwright install chromium
```
### Clone and Run
```bash
git clone <repository-url>
cd olproxy
python olproxy.py
```
## Usage
### Basic Usage
Start the proxy server:
```bash
python olproxy.py
```
The server will start on `localhost:11434` by default (same as Ollama).
### Command Line Options
```bash
python olproxy.py --help
```
Options:
- `--ip`: Server IP address (default: localhost)
- `--port`: Server port (default: 11434)
- `--connect`: Connect to existing browser via CDP (e.g., ws://localhost:9222)
### API Examples
#### Using Ollama API Format
```bash
# List available models
curl http://localhost:11434/api/tags
# Generate text
curl -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok:latest",
    "prompt": "Explain quantum computing"
  }'

# Chat completion
curl -X POST http://localhost:11434/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok:latest",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
```
#### Using OpenAI API Format
```bash
# List models
curl http://localhost:11434/v1/models
# Chat completion
curl -X POST http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok:latest",
    "messages": [
      {"role": "user", "content": "Write a Python function to calculate fibonacci"}
    ]
  }'
```
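The same calls can be made from Python. Below is a minimal sketch using the `requests` library (installed separately, e.g. `pip install requests`), assuming the proxy is running on the default `localhost:11434` and that `grok:latest` is configured:
```python
import requests

# Ask the proxy for a chat completion in Ollama format.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "grok:latest",
        "messages": [
            {"role": "user", "content": "Hello, how are you?"}
        ],
    },
    timeout=300,  # web-based chatbots can take a while to answer
)
response.raise_for_status()
print(response.json())
```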
## Configuration
### Adding New Chatbot Services
Edit the `CHATBOT_CONFIG` dictionary in `olproxy.py`:
```python
CHATBOT_CONFIG = {
    "your-model:latest": {
        "url": "https://your-chatbot-site.com",
        "input_selector": "textarea",  # CSS selector for input field
        "send_button_selector": "button[type='submit']",  # CSS selector for send button
        "container_selector": "#chat-container",  # CSS selector for chat container
        "spy_word_base": "SPYWORD_123"  # Base word for response detection
    }
}
```
### Browser Configuration
The proxy uses Playwright with a persistent browser context stored in `./playwright_data/`. This allows:
- Session persistence across restarts
- Login state maintenance
- Reduced setup time for subsequent requests
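As a rough illustration of what a persistent context looks like with Playwright's async API (illustrative only, not the exact code in `olproxy.py`; the URL is an example):
```python
import asyncio
from playwright.async_api import async_playwright

async def open_persistent_context():
    # Launch Chromium with a persistent profile so cookies and login
    # state survive restarts (stored under ./playwright_data/).
    async with async_playwright() as pw:
        context = await pw.chromium.launch_persistent_context(
            user_data_dir="./playwright_data/",
            headless=False,  # headed mode makes manual logins possible
        )
        page = await context.new_page()
        await page.goto("https://grok.com")
        # ... interact with the chat UI here ...
        await context.close()

asyncio.run(open_persistent_context())
```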
## How It Works
### Response Extraction Strategy
OLProxy uses a sophisticated multi-layered approach to extract AI responses:
1. **Spy Word Detection**: Injects unique markers into prompts to identify response boundaries
2. **Progressive Content Monitoring**: Tracks content changes in real-time
3. **Heuristic Fallback**: Uses DOM analysis to identify likely bot responses
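To make the spy word idea concrete, here is a simplified, self-contained sketch; the marker format and regex are illustrative and do not mirror the exact logic in `olproxy.py`:
```python
import re
import uuid
from typing import Optional, Tuple

def wrap_prompt(prompt: str) -> Tuple[str, str]:
    """Attach a unique marker and ask the model to echo it back,
    so the end of the response can be detected in the page text."""
    marker = f"SPYWORD_{uuid.uuid4().hex[:8].upper()}"
    wrapped = f"{prompt}\n\nWhen you are done, write {marker} on its own line."
    return wrapped, marker

def extract_response(page_text: str, wrapped_prompt: str, marker: str) -> Optional[str]:
    """Return the text between the submitted prompt and the echoed marker,
    or None if the marker has not appeared yet."""
    pattern = re.compile(
        re.escape(wrapped_prompt) + r"(.*?)" + re.escape(marker),
        re.DOTALL,
    )
    match = pattern.search(page_text)
    return match.group(1).strip() if match else None
```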
### Request Flow
1. Client sends request to OLProxy API endpoint
2. OLProxy extracts model and prompt information
3. Browser navigates to configured chatbot URL (if not already there)
4. Prompt is injected with spy words and submitted
5. Response is monitored and extracted using multiple detection strategies
6. Clean response is returned in Ollama/OpenAI format
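Expressed as code, this flow roughly maps onto an aiohttp handler like the one below. This is a hypothetical sketch: `ask_chatbot` is a stand-in for the Playwright-driven steps 3-5 and is not a function defined in `olproxy.py`:
```python
from aiohttp import web

async def ask_chatbot(model: str, prompt: str) -> str:
    """Placeholder for steps 3-5: navigate to the chatbot page,
    inject spy words, submit the prompt, and extract the reply."""
    return f"[reply from {model}]"

async def handle_generate(request: web.Request) -> web.Response:
    # Steps 1-2: parse the request and pick out model and prompt.
    body = await request.json()
    model = body.get("model", "grok:latest")
    prompt = body.get("prompt", "")

    # Steps 3-5: drive the browser (stubbed out above).
    answer = await ask_chatbot(model, prompt)

    # Step 6: return the cleaned response in Ollama's /api/generate shape.
    return web.json_response({"model": model, "response": answer, "done": True})

app = web.Application()
app.router.add_post("/api/generate", handle_generate)
# web.run_app(app, host="localhost", port=11434)
```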
## API Endpoints
### Ollama API Endpoints
- `GET /api/tags` - List available models
- `GET /api/show/{model}` - Show model details
- `POST /api/generate` - Generate text completion
- `POST /api/chat` - Chat completion
- `GET /api/ps` - List loaded models
- `GET /api/version` - Get version information
- `GET /` - Health check
### OpenAI API Endpoints
- `GET /v1/models` - List available models
- `POST /v1/chat/completions` - Chat completion
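Because these routes mimic OpenAI's API, standard OpenAI client libraries can usually be pointed at the proxy by overriding the base URL. A sketch using the official `openai` Python package (v1.x, installed separately); the API key is a placeholder, assuming the proxy does not validate it:
```python
from openai import OpenAI

# Point the standard OpenAI client at OLProxy instead of api.openai.com.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="olproxy")

completion = client.chat.completions.create(
    model="grok:latest",
    messages=[{"role": "user", "content": "Summarize what OLProxy does."}],
)
print(completion.choices[0].message.content)
```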
## Limitations
- **Web Interface Dependency**: Relies on web chatbot interfaces which may change
- **Rate Limiting**: Subject to the rate limits of underlying web services
- **Authentication**: May require manual login to web services
- **Reliability**: Web scraping approach may be affected by UI changes
## Troubleshooting
### Common Issues
1. **Browser fails to start**: Ensure Playwright is properly installed
   ```bash
   playwright install chromium
   ```
2. **Response extraction fails**: Check if the web interface has changed
   - Update CSS selectors in `CHATBOT_CONFIG`
   - Check browser console for errors
3. **Authentication required**:
   - Run with `--headless=False` to manually log in
   - Browser state is persisted in `./playwright_data/`
### Debugging
Enable debug logging:
```python
import logging

logging.basicConfig(level=logging.DEBUG)
```
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add support for new chatbot services
4. Submit a pull request
## License
This project is licensed under the GNU General Public License v3.0 (GPLv3) - see the [LICENSE](LICENSE) file for details.
## Author
**Stefy Lanza** - stefy@nexlab.net
## Disclaimer
This tool is for educational and development purposes. Users are responsible for complying with the terms of service of the underlying chatbot platforms. The authors are not responsible for any misuse or violations of third-party terms of service.
## Support
For issues, questions, or contributions, please open an issue on the project repository.