# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Project specific
logs/
videos/
*.db
*.sqlite
*.sqlite3
data/
temp/
build_output/
When the user asks to update the version to a specific version number (e.g., "update the version to 1.0.10"), follow these steps:
1. Parse the target version from the user's request. For example, if they say "update to 1.0.10", the target version is 1.0.10.
2. Calculate the current version by decrementing the patch version by 1. For 1.0.10, the current version is 1.0.9.
3. Search the entire codebase for all occurrences of:
- The current semver format: e.g., "1.0.9"
- The current release format: e.g., "1.0r9"
4. Replace all found instances:
- Replace "1.0.9" with "1.0.10"
- Replace "1.0r9" with "1.0r10"
5. Use regex search to find these patterns across all files in the project.
6. Update all occurrences found in the search results using apply_diff for precise changes.
7. Update the build/package version in build.py:
- Locate the BUILD_CONFIG dictionary
- Update 'app_version': 'current_version' to 'app_version': 'target_version'
This ensures consistent version updates across all version references in the codebase, including user agent strings, application versions, version constants, and build/package metadata.
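The replacement step can be sketched with Python's `re` module (a sketch; the two version formats follow the examples above, and a real implementation would want word boundaries so e.g. "11.0.9" is not partially matched):

```python
import re

def bump_versions(text: str, current: str, target: str) -> str:
    """Replace semver and release-format version strings in a blob of text.

    current/target are semver strings like "1.0.9" / "1.0.10"; the release
    format swaps the last dot for an 'r' (e.g. "1.0r9"), as described above.
    """
    cur_prefix, cur_patch = current.rsplit(".", 1)
    tgt_prefix, tgt_patch = target.rsplit(".", 1)
    cur_release = f"{cur_prefix}r{cur_patch}"
    tgt_release = f"{tgt_prefix}r{tgt_patch}"
    # Escape the dots so "1.0.9" does not also match "1x0x9".
    # Note: for real use, add lookarounds so "1.0.9" inside "11.0.9" is skipped.
    text = re.sub(re.escape(current), target, text)
    text = re.sub(re.escape(cur_release), tgt_release, text)
    return text

# e.g. bump_versions('UA = "app/1.0.9 (1.0r9)"', "1.0.9", "1.0.10")
#      -> 'UA = "app/1.0.10 (1.0r10)"'
```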
# MbetterClient API Authentication Documentation
## Overview
The MbetterClient web dashboard implements a comprehensive authentication system that supports multiple authentication methods and role-based access control. This document outlines how authentication works and which endpoints require authentication.
## Authentication Methods
### 1. Bearer Token Authentication
- **JWT Tokens**: Short-lived tokens (24 hours default) obtained via `/auth/token` endpoint
- **API Tokens**: Long-lived tokens (1 year default) created via `/api/tokens` endpoint
- **Header**: `Authorization: Bearer <token>`
### 2. Web Session Authentication
- Flask-Login based session authentication for web interface
- Automatic fallback when no Bearer token is provided
- Requires active login session
### 3. Localhost Auto-Authentication
- Requests from `127.0.0.1` or `localhost` are automatically authenticated as admin
- No token or session required for local development/testing
## User Roles
### Admin User
- Full system access
- Can manage users, configuration, and all system functions
- `role = 'admin'` or `is_admin = True`
### Cashier User
- Limited access for betting operations
- Can create/view bets, verify bets, manage cashier-specific functions
- `role = 'cashier'`
### Normal User
- Basic access to betting and viewing functions
- Cannot access administrative functions
- `role = 'normal'` (default)
## Authentication Decorators
### `@get_api_auth_decorator()`
- Requires authentication (any authenticated user)
- Accepts both Bearer tokens and web sessions
- Localhost requests auto-authenticated as admin
### `@get_api_auth_decorator(require_admin=True)`
- Requires admin-level authentication
- Only admin users can access these endpoints
- Accepts both Bearer tokens and web sessions
## API Endpoints by Authentication Level
### Public Endpoints (No Authentication Required)
These endpoints are accessible without any authentication and are intended for public access:
- `GET /api/status` - System status
- `GET /api/verify-bet/<uuid:bet_id>` - Bet verification (public for bet checking)
- `GET /api/verify-barcode` - Barcode verification (public for bet checking)
- `GET /api/barcode/<uuid:bet_id>` - Generate bet barcode (public for bet checking)
- `GET /api/barcode-data/<uuid:bet_id>` - Get barcode data (public for bet checking)
- `GET /api/templates/<template_name>` - Template preview (public for display)
### Authenticated Endpoints (Any Logged-in User)
These endpoints require authentication. Most accept any user role; entries marked (admin only) additionally require the admin role and are listed again in the Admin-Only section below:
- `GET /api/video/status` - Video player status
- `POST /api/video/control` - Video player control
- `POST /api/overlay` - Update video overlay
- `GET /api/templates` - Get available templates
- `GET /api/config` - Get configuration
- `GET /api/config/<section>` - Get config section
- `POST /api/config/<section>` - Update config section (admin only)
- `POST /api/config` - Update configuration (admin only)
- `GET /api/config/match-interval` - Get match interval
- `POST /api/config/match-interval` - Set match interval
- `GET /api/config/license-text` - Get license text
- `POST /api/config/license-text` - Set license text
- `POST /api/config/test-connection` - Test API connection (admin only)
- `GET /api/tokens` - Get user API tokens
- `POST /api/tokens` - Create API token
- `DELETE /api/tokens/<token_id>` - Revoke API token
- `GET /api/logs` - Get application logs (admin only)
- `POST /api/test-message` - Send test message (admin only)
- `POST /api/video/upload` - Upload video file
- `POST /api/video/delete` - Delete video file
- `POST /api/templates/upload` - Upload template (admin only)
- `DELETE /api/templates/<template_name>` - Delete template (admin only)
- `GET /api/outcome-assignments` - Get outcome assignments
- `POST /api/extraction/result-options/add` - Add result option
- `POST /api/extraction/result-options/delete` - Delete result option
- `GET /api/extraction/redistribution-cap` - Get redistribution cap (admin only)
- `POST /api/extraction/redistribution-cap` - Set redistribution cap (admin only)
- `GET /api/currency-settings` - Get currency settings
- `POST /api/currency-settings` - Set currency settings (admin only)
- `GET /api/match-timer/state` - Get match timer state
- `POST /api/match-timer/control` - Control match timer (admin only)
- `GET /api/cashier/bets` - Get cashier bets
- `POST /api/cashier/bets` - Create cashier bet
- `GET /api/cashier/bets/<bet_id>` - Get cashier bet details
- `DELETE /api/cashier/bets/<bet_id>` - Cancel cashier bet
- `GET /api/cashier/available-matches` - Get available matches for betting
- `DELETE /api/cashier/bet-details/<detail_id>` - Delete bet detail
- `DELETE /api/bets/<bet_id>` - Delete admin bet (admin only)
- `POST /api/cashier/bets/<bet_id>/mark-paid` - Mark cashier bet as paid
- `POST /api/bets/<bet_id>/mark-paid` - Mark admin bet as paid
- `GET /api/barcode-settings` - Get barcode settings
- `POST /api/barcode-settings` - Set barcode settings (admin only)
- `GET /api/qrcode-settings` - Get QR code settings
- `POST /api/qrcode-settings` - Set QR code settings (admin only)
- `GET /api/statistics` - Get extraction statistics
- `GET /api/statistics/<stats_id>` - Get statistics details
- `POST /api/system/shutdown` - Shutdown application (admin only)
- `POST /api/upload-intro-video` - Upload intro video (admin only)
### Admin-Only Endpoints
These endpoints require admin role authentication:
- `GET /api/debug/match-status` - Debug match status
- `POST /api/config/<section>` - Update config section
- `POST /api/config` - Update configuration
- `POST /api/config/test-connection` - Test API connection
- `GET /api/logs` - Get application logs
- `POST /api/test-message` - Send test message
- `POST /api/templates/upload` - Upload template
- `DELETE /api/templates/<template_name>` - Delete template
- `GET /api/extraction/redistribution-cap` - Get redistribution cap
- `POST /api/extraction/redistribution-cap` - Set redistribution cap
- `POST /api/currency-settings` - Set currency settings
- `POST /api/match-timer/control` - Control match timer
- `DELETE /api/bets/<bet_id>` - Delete admin bet
- `POST /api/barcode-settings` - Set barcode settings
- `POST /api/qrcode-settings` - Set QR code settings
- `POST /api/system/shutdown` - Shutdown application
- `POST /api/upload-intro-video` - Upload intro video
## Authentication Flow
### For API Requests:
1. Check if request is from localhost (127.0.0.1/localhost) → Auto-authenticate as admin
2. Check for `Authorization: Bearer <token>` header
3. If Bearer token present:
- Try to verify as JWT token
- If JWT fails, try to verify as API token
4. If no Bearer token:
- Check for active Flask-Login web session
- If session exists, use session user
5. If no authentication found → Return 401 Unauthorized
### For Admin-Required Endpoints:
1. Perform standard authentication (above)
2. Check if authenticated user has admin role (`role == 'admin'` or `is_admin == True`)
3. If not admin → Return 403 Forbidden
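The two flows above can be sketched as a single framework-agnostic check (a sketch, not the actual decorator; `request` is a plain dict standing in for the framework request object, and `verify_jwt` / `verify_api_token` are placeholders for the real token verifiers):

```python
def verify_jwt(token):
    """Placeholder: the real implementation validates signature and expiry."""
    return None

def verify_api_token(token):
    """Placeholder: the real implementation looks the token up in the database."""
    return None

def authenticate(request, require_admin=False):
    """Resolve the user for `request`, returning (user, http_status)."""
    if request.get("remote_addr") in ("127.0.0.1", "localhost"):
        user = {"username": "local", "role": "admin"}  # step 1: localhost auto-auth
    else:
        auth = request.get("headers", {}).get("Authorization", "")
        if auth.startswith("Bearer "):                 # steps 2-3: Bearer token
            token = auth[len("Bearer "):]
            user = verify_jwt(token) or verify_api_token(token)
        else:                                          # step 4: web session fallback
            user = request.get("session_user")
        if user is None:
            return None, 401                           # step 5: Unauthorized
    if require_admin and not (user.get("role") == "admin" or user.get("is_admin")):
        return None, 403                               # admin check: Forbidden
    return user, 200
```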
## Token Management
### JWT Tokens
- Created via `POST /auth/token` with username/password
- Short-lived (24 hours default)
- Stored in database for revocation tracking
- Include user info and expiration
### API Tokens
- Created via `POST /api/tokens` (authenticated users only)
- Long-lived (1 year default)
- Can be revoked individually
- User-specific
## Error Responses
### 401 Unauthorized
```json
{
  "error": "Authentication required"
}
```
### 403 Forbidden
```json
{
  "error": "Admin access required"
}
```
## Testing Authentication
### Local Development
- Requests from `127.0.0.1` or `localhost` are auto-authenticated as admin
- No tokens or sessions required for local testing
### Production Testing
1. **Get JWT Token:**
```bash
curl -X POST http://your-server/auth/token \
-H "Content-Type: application/json" \
-d '{"username": "admin", "password": "password"}'
```
2. **Use JWT Token:**
```bash
curl -H "Authorization: Bearer <token>" \
http://your-server/api/status
```
3. **Create API Token:**
```bash
curl -X POST -H "Authorization: Bearer <jwt_token>" \
http://your-server/api/tokens \
-H "Content-Type: application/json" \
-d '{"name": "Test Token", "expires_hours": 8760}'
```
## Security Considerations
1. **Token Storage**: Never store tokens in client-side localStorage for production
2. **HTTPS**: Always use HTTPS in production to protect token transmission
3. **Token Expiration**: Implement proper token refresh logic for long-running applications
4. **Rate Limiting**: Consider implementing rate limiting for authentication endpoints
5. **Audit Logging**: All authentication attempts are logged for security monitoring
## Implementation Notes
- The authentication system uses lazy initialization to avoid circular dependencies
- Localhost auto-authentication is intended for development only
- Role-based access control is enforced at the decorator level
- All authentication failures return appropriate HTTP status codes
- The system supports both programmatic API access and web interface access
# Minimal Prompt: Client-Side Last Sync Query Implementation
## What Changed on Server
Server now has a new endpoint to query last sync information:
**Endpoint**: `GET /api/reports/last-sync?client_id=<client_id>`
**Authentication**: Bearer token (API token)
**Response Format**:
```json
{
  "success": true,
  "client_id": "client_unique_identifier",
  "last_sync_id": "sync_20260201_214327_abc12345",
  "last_sync_timestamp": "2026-02-01T21:43:27.249Z",
  "last_sync_type": "incremental",
  "total_syncs": 25,
  "last_sync_summary": {
    "total_payin": 100000.0,
    "total_payout": 95000.0,
    "net_profit": 5000.0,
    "total_bets": 50,
    "total_matches": 10,
    "cap_compensation_balance": 5000.0
  },
  "server_timestamp": "2026-02-01T21:43:27.249Z"
}
```
## What You Need to Implement
### 1. Add Function to Query Server
```python
import requests

def query_server_last_sync(api_token, client_id):
    """Query the server for last sync information."""
    url = "https://your-server.com/api/reports/last-sync"
    headers = {"Authorization": f"Bearer {api_token}"}
    params = {"client_id": client_id}
    # Time out rather than hang forever if the server is unreachable
    response = requests.get(url, headers=headers, params=params, timeout=10)
    return response.json()
```
### 2. Call Before Each Sync
```python
# Before performing sync
server_info = query_server_last_sync(api_token, client_id)
if server_info.get('success'):
    last_sync_id = server_info.get('last_sync_id')
    last_sync_time = server_info.get('last_sync_timestamp')
    # Compare with your local tracking
    # If a mismatch is detected, perform a full sync instead of incremental
```
### 3. Handle Recovery
If your local tracking is corrupted or lost:
```python
# If no local tracking exists
if not local_tracking_exists():
    # Query server for last sync
    server_info = query_server_last_sync(api_token, client_id)
    # Recover local tracking from server state
    if server_info.get('last_sync_id'):
        update_local_tracking(
            sync_id=server_info['last_sync_id'],
            timestamp=server_info['last_sync_timestamp'],
        )
```
## Key Benefits
1. **Verify Server State**: Check what server has before syncing
2. **Detect Corruption**: Compare local tracking with server
3. **Auto-Recovery**: Restore local tracking from server if lost
4. **Prevent Data Loss**: Ensure no syncs are missed
## Integration Point
Add this call to your existing sync flow:
```python
# Existing sync flow
def perform_sync():
    # NEW: Query the server first
    server_info = query_server_last_sync(api_token, client_id)
    # Verify and recover if needed
    if needs_recovery(server_info):
        recover_from_server(server_info)
    # Continue with the normal sync
    send_sync_data()
```
That's it! Just add the query call before your existing sync logic.
# MBetter Client Discovery Application
A Qt6-based LAN discovery tool that automatically detects MBetterClient servers on the local network and opens the dashboard in a web browser.
## Features
- **Automatic Discovery**: Listens for UDP broadcasts from MBetterClient servers on the local network
- **Cross-Platform**: Compatible with Linux and Windows
- **System Tray Support**: Runs minimized in the system tray
- **Auto-Open Browser**: Automatically opens discovered servers in the default web browser
- **Manual Connection**: Connect to servers manually by entering their URL
- **SSL Support Detection**: Identifies servers running with SSL/HTTPS enabled
- **Real-time Updates**: Shows discovered servers in real-time with timestamps
## Installation & Setup
### Quick Setup
1. Run the setup script to install dependencies:
```bash
python3 setup_discovery.py # Linux/Mac
python setup_discovery.py # Windows
```
2. Run the discovery application:
```bash
./run_discovery.sh # Linux/Mac
run_discovery.bat # Windows
```
### Manual Installation
1. Install required Python packages:
```bash
pip install PyQt6 netifaces
```
2. Run the application:
```bash
python3 mbetter_discovery.py
```
### Creating Standalone Executable
The setup script can create a standalone executable:
```bash
python3 setup_discovery.py
# When prompted, choose 'y' to create executable
```
The executable will be created in the `dist/` directory.
## How It Works
### UDP Broadcast Protocol
The MBetterClient main application broadcasts UDP packets every 30 seconds on port `45123` with the following JSON structure:
```json
{
  "service": "MBetterClient",
  "host": "192.168.1.100",
  "port": 5001,
  "ssl": false,
  "url": "http://192.168.1.100:5001",
  "timestamp": 1693910123.456
}
```
### Discovery Process
1. The discovery app listens on UDP port `45123` (configurable)
2. When a broadcast is received, it validates the JSON structure
3. If valid, the server is added to the discovered servers list
4. If auto-open is enabled, the browser opens the server's dashboard URL
5. The app shows a system tray notification about the discovery
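A minimal listener for the protocol above can be sketched with the standard library (a sketch; field names follow the JSON structure shown, and the GUI, tray, and auto-open logic are omitted):

```python
import json
import socket

def parse_announcement(payload: bytes):
    """Validate one broadcast datagram; return the parsed dict or None."""
    try:
        info = json.loads(payload.decode("utf-8"))
    except ValueError:
        return None
    # Validate the structure before trusting it (step 2 above)
    if isinstance(info, dict) and info.get("service") == "MBetterClient" and "url" in info:
        return info
    return None

def listen(port=45123):
    """Block on `port`, yielding valid announcements as they arrive."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    while True:
        data, _addr = sock.recvfrom(4096)
        info = parse_announcement(data)
        if info is not None:
            yield info
```

Invalid JSON and packets from other services are silently dropped, which matches the validation step described above.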
## Usage
### Main Window
- **Settings Section**: Configure auto-open browser and listen port
- **Discovered Servers**: Shows all found servers with their URLs and discovery time
- **Manual Connection**: Enter a URL manually to connect to a server
- **Status & Log**: Real-time status and activity log
### System Tray
- Right-click the tray icon to access the menu
- Double-click to show/hide the main window
- The application continues running in the background when the window is closed
### Keyboard Shortcuts
- Double-click a server in the list to open it in the browser
- The application minimizes to tray when closed (if system tray is available)
## Configuration
### Settings
- **Auto-open Browser**: Automatically open discovered servers in the default browser
- **Listen Port**: UDP port to listen for broadcasts (default: 45123)
### Manual Connection
If you know the IP address of a MBetterClient server, you can connect manually:
1. Enter the URL in the "Manual URL" field (e.g., `http://192.168.1.100:5001`)
2. Click "Connect" to open it in the browser
## Network Requirements
### Firewall Configuration
Ensure the following ports are open:
- **UDP 45123**: For receiving broadcasts (incoming)
- **TCP 80/443**: For HTTP/HTTPS web access (outgoing)
### Network Discovery
The application works on:
- Local Area Networks (LAN)
- Wi-Fi networks
- Wired Ethernet networks
- VPN networks (if UDP broadcasts are allowed)
## Troubleshooting
### Common Issues
**No servers discovered:**
- Check that MBetterClient is running on the network
- Verify firewall settings allow UDP traffic on port 45123
- Ensure you're on the same network segment
- Try changing the listen port in settings
**Application won't start:**
- Ensure Python 3.7+ is installed
- Install required dependencies: `pip install PyQt6 netifaces`
- Check the log output for specific error messages
**Browser doesn't open automatically:**
- Check the "Auto-open browser" setting is enabled
- Manually click on discovered servers in the list
- Use the manual connection feature
**System tray not working:**
- Some Linux environments may not support system tray
- The application will still work without tray functionality
- Use the "Minimize to Tray" button to test tray support
### Debug Mode
Run with Python directly to see detailed logging:
```bash
python3 mbetter_discovery.py
```
Check the log output in the application window for troubleshooting information.
## Technical Details
### Dependencies
- **PyQt6**: GUI framework
- **netifaces**: Network interface detection (optional, improves broadcast detection)
- **Standard Python libraries**: socket, json, threading, webbrowser
### Protocol Details
- **Transport**: UDP broadcast
- **Port**: 45123 (configurable)
- **Format**: JSON
- **Frequency**: Every 30 seconds from MBetterClient servers
- **Scope**: Local network broadcast domain
### Security Notes
- UDP broadcasts are unencrypted and visible to all network users
- No authentication is required for discovery
- The discovery process only reads broadcast messages, it doesn't send any data
- Browser connections use the server's configured protocol (HTTP/HTTPS)
## Building from Source
### Development Setup
1. Clone or copy the discovery files:
- `mbetter_discovery.py` - Main application
- `setup_discovery.py` - Setup script
- `DISCOVERY_README.md` - This documentation
2. Install development dependencies:
```bash
pip install PyQt6 netifaces pyinstaller
```
3. Run directly:
```bash
python3 mbetter_discovery.py
```
### Creating Distribution
Use PyInstaller to create standalone executables:
```bash
# Single file executable
pyinstaller --onefile --windowed --name MBetterDiscovery mbetter_discovery.py
# Directory distribution (includes all dependencies)
pyinstaller --windowed --name MBetterDiscovery mbetter_discovery.py
```
## License
This discovery application is part of the MBetterClient project and follows the same licensing terms.
# MbetterClient Result Extraction Algorithm
## Overview
The Result Extraction Algorithm is the core component responsible for determining match outcomes in the MbetterClient betting system. It ensures fair play while maintaining system profitability through sophisticated redistribution controls and CAP (Controlled Redistribution) logic.
## Algorithm Flow
### Phase 1: Initialization and Data Collection
#### Step 1.1: Input Validation
- **Inputs**: `fixture_id`, `match_id`
- **Validation**: Ensure match exists and has pending bets
- **Initialization**: Set `selected_result = None`, `extraction_winning_outcome_names = []`
#### Step 1.2: Match Outcomes Retrieval
```sql
SELECT column_name, float_value FROM match_outcomes WHERE match_id = ?
```
- Retrieves all possible betting outcomes for the match (WIN1, WIN2, DRAW, KO1, KO2, UNDER, OVER, etc.)
- Each outcome has an associated coefficient (payout multiplier)
#### Step 1.3: Result Options Filtering
```sql
SELECT result_name FROM result_options
WHERE is_active = true
AND result_name IN (match_outcomes_list)
AND result_name NOT IN ('UNDER', 'OVER')
```
- Identifies active result options that correspond to match outcomes
- Excludes UNDER/OVER as they are handled separately
### Phase 2: Financial Analysis
#### Step 2.1: Total Payin Calculation
**Total Intake = UNDER/OVER Bets + Other Bets**
**UNDER/OVER Payin:**
```sql
SELECT SUM(amount) FROM bet_detail
WHERE match_id = ? AND outcome IN ('UNDER', 'OVER')
AND result = 'pending' AND result != 'cancelled'
```
**Other Bets Payin:**
```sql
SELECT SUM(amount) FROM bet_detail
WHERE match_id = ? AND outcome NOT IN ('UNDER', 'OVER')
AND result = 'pending' AND result != 'cancelled'
```
**Total Payin = under_payin + other_payin**
#### Step 2.2: UNDER/OVER Payout Calculation
For each UNDER/OVER outcome:
```
payout = bet_amount × coefficient
```
- UNDER payout = total UNDER bets × UNDER coefficient
- OVER payout = total OVER bets × OVER coefficient
#### Step 2.3: CAP Threshold Calculation
**Base CAP Threshold = Total Payin × CAP Percentage**
Where CAP Percentage defaults to 70% but is configurable.
**Adjusted CAP Threshold = Base CAP + Accumulated Shortfall**
**Final CAP Threshold = Adjusted CAP - UNDER/OVER Winner Payout**
If UNDER wins: `Final CAP = Adjusted CAP - under_payout`
If OVER wins: `Final CAP = Adjusted CAP - over_payout`
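Numerically, the three-step calculation above looks like this (a sketch with made-up figures):

```python
def final_cap_threshold(total_payin, cap_pct, shortfall, uo_winner_payout):
    """Apply the three CAP adjustments described above, in order."""
    base = total_payin * cap_pct        # Base CAP = intake x CAP percentage
    adjusted = base + shortfall         # carry accumulated shortfall forward
    return adjusted - uo_winner_payout  # subtract the committed UNDER/OVER payout

# e.g. 1000 intake, 70% CAP, 50 accumulated shortfall, 120 committed to the
# UNDER/OVER winner: 1000*0.70 + 50 - 120 = 630.0
```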
### Phase 3: Result Selection
#### Step 3.1: Payout Calculation for Each Result
For each possible result (WIN1, WIN2, DRAW, etc.):
**Find Associated Outcomes:**
```sql
SELECT outcome_name FROM extraction_associations
WHERE extraction_result = 'result_name'
```
**Calculate Total Result Payout:**
```
result_payout = 0
FOR EACH associated_outcome:
bet_amount = SUM(bets on associated_outcome)
coefficient = outcome_coefficient
result_payout += bet_amount × coefficient
```
**Example:**
- Result: "WIN1"
- Associated outcomes: "KO1" (coeff: 3.0), "SUB1" (coeff: 2.5)
- Bets: $10 on KO1, $15 on SUB1
- WIN1 payout = ($10 × 3.0) + ($15 × 2.5) = $30 + $37.50 = $67.50
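The worked example above, as code (a sketch; the bet sums and coefficients are assumed already fetched by the queries shown earlier):

```python
def result_payout(associated_outcomes, bets, coefficients):
    """Total payout if this result is selected (step 3.1 above).

    associated_outcomes: outcome names linked to the result
    bets: outcome -> total amount staked on it
    coefficients: outcome -> payout multiplier
    """
    return sum(bets.get(o, 0) * coefficients[o] for o in associated_outcomes)

# WIN1 example above: $10 on KO1 at 3.0 plus $15 on SUB1 at 2.5
# result_payout(["KO1", "SUB1"], {"KO1": 10, "SUB1": 15},
#               {"KO1": 3.0, "SUB1": 2.5}) -> 67.5
```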
#### Step 3.2: Eligibility Filtering
```
eligible_results = {result: payout for result, payout in payouts.items()
                    if payout <= final_cap_threshold}
```
#### Step 3.3: Fallback Logic
If no results are eligible (all payouts exceed CAP):
```
lowest_payout_result = min(payouts, key=payouts.get)
eligible_results = {lowest_payout_result: payouts[lowest_payout_result]}
```
#### Step 3.4: Weighted Random Selection
Among eligible results, select the one with the maximum payout, breaking ties at random:
```
max_payout = max(eligible_results.values())
candidates = [r for r, p in eligible_results.items() if p == max_payout]
selected_result = random.choice(candidates)
```
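Steps 3.2 through 3.4 combine into a single selection routine (a sketch; `payouts` and `final_cap_threshold` are assumed computed as in the steps above):

```python
import random

def select_result(payouts, final_cap_threshold):
    """Pick the extraction result per steps 3.2-3.4 above.

    payouts maps result name -> total payout if that result is selected.
    """
    # Step 3.2: keep only results whose payout fits under the CAP
    eligible = {r: p for r, p in payouts.items() if p <= final_cap_threshold}
    # Step 3.3: fallback - if nothing fits, take the cheapest result
    if not eligible:
        cheapest = min(payouts, key=payouts.get)
        eligible = {cheapest: payouts[cheapest]}
    # Step 3.4: among eligible results, maximize payout; break ties at random
    max_payout = max(eligible.values())
    candidates = [r for r, p in eligible.items() if p == max_payout]
    return random.choice(candidates)
```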
### Phase 4: Winning Outcomes Determination
#### Step 4.1: Association Lookup
```sql
SELECT DISTINCT outcome_name FROM extraction_associations
WHERE extraction_result = selected_result
```
#### Step 4.2: Validation Against Match Outcomes
```
extraction_winning_outcome_names = [outcome for outcome in associated_outcomes
                                    if outcome in match_possible_outcomes]
```
### Phase 5: Bet Result Updates
#### Step 5.1: UNDER/OVER Bet Processing
Based on stored `under_over_result` from video selection:
**If UNDER wins:**
- Mark UNDER bets as 'win' with `win_amount = bet_amount × under_coefficient`
- Mark OVER bets as 'lost'
**If OVER wins:**
- Mark OVER bets as 'win' with `win_amount = bet_amount × over_coefficient`
- Mark UNDER bets as 'lost'
#### Step 5.2: Selected Result Processing
Mark bets on `selected_result` as 'win':
```
win_amount = bet_amount × result_coefficient
```
#### Step 5.3: Associated Outcomes Processing
For each outcome in `extraction_winning_outcome_names`:
```
outcome_coefficient = get_coefficient(match_id, outcome)
win_amount = bet_amount × outcome_coefficient
```
#### Step 5.4: Loss Processing
Mark all other bets as 'lost':
```
protected_outcomes = [selected_result] + extraction_winning_outcome_names + ['UNDER', 'OVER']
UPDATE bet_detail SET result = 'lost'
WHERE match_id = ? AND outcome NOT IN protected_outcomes AND result = 'pending'
```
### Phase 6: Database Updates
#### Step 6.1: Match Result Storage
```sql
UPDATE matches SET
result = selected_result,
winning_outcomes = json.dumps(extraction_winning_outcome_names),
under_over_result = under_over_result,
result_breakdown = json.dumps({
'selected_result': selected_result,
'winning_outcomes': extraction_winning_outcome_names,
'under_over_result': under_over_result
})
WHERE id = match_id
```
#### Step 6.2: Statistics Collection
Store comprehensive extraction metrics in `extraction_stats` table:
- Total bets, amounts collected, redistributed
- UNDER/OVER statistics
- CAP applied status
- Result breakdown
#### Step 6.3: Redistribution Adjustment Tracking
**Expected Redistribution = Total Payin × CAP Percentage**
**Actual Redistribution = Sum of all win_amounts**
**Adjustment = Expected - Actual**
- Positive: Under-redistribution (shortfall) - accumulated to increase future CAP
- Negative: Over-redistribution (surplus) - accumulated to decrease future CAP
Update daily redistribution adjustment tracking for future CAP adjustments.
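The adjustment arithmetic above, as a sketch (made-up figures):

```python
def redistribution_adjustment(total_payin, cap_pct, win_amounts):
    """Expected minus actual redistribution (step 6.3 above).

    Positive -> shortfall (raises the future CAP);
    negative -> surplus (lowers the future CAP).
    """
    expected = total_payin * cap_pct  # Expected Redistribution
    actual = sum(win_amounts)         # Actual Redistribution
    return expected - actual

# e.g. 1000 payin at 70% CAP with 650 actually paid out -> +50 shortfall
```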
### Phase 7: System Notifications
#### Step 7.1: Result Broadcasting
Send `PLAY_VIDEO_RESULTS` message with:
- `fixture_id`, `match_id`
- `result = selected_result`
- `under_over_result`
- `winning_outcomes = extraction_winning_outcome_names`
#### Step 7.2: Match Completion
Send `MATCH_DONE` message to advance to next match.
## Key Design Principles
### CAP (Controlled Redistribution) Logic
- **Purpose**: Prevent excessive payouts that could harm profitability
- **Mechanism**: Only select results where total payout ≤ CAP threshold
- **Adjustment Factors**:
- Accumulated redistribution adjustments from previous extractions
- Committed UNDER/OVER payouts
- Total intake from all betting activity
### Multi-Level Winning System
- **Primary Result**: Main outcome selected by algorithm
- **Associated Outcomes**: Additional winning outcomes
- **UNDER/OVER Independence**: Parallel result system
### Profit Maximization
- **Weighted Selection**: Prefers results maximizing redistribution
- **Fallback Protection**: Always selects result, even if CAP exceeded
- **Adjustment Carryover**: Tracks and compensates for redistribution imbalances (both under and over)
### Data Integrity
- **Transaction Safety**: All updates in database transactions
- **Comprehensive Logging**: All decisions and calculations logged
- **Error Recovery**: Multiple fallback mechanisms
## Configuration Parameters
- **CAP Percentage**: Default 70%, configurable via `extraction_redistribution_cap`
- **Adjustment Tracking**: Daily accumulated redistribution adjustments affect future CAP calculations
- **Result Associations**: Configurable via `extraction_associations` table
- **Betting Mode**: Affects match status progression but not extraction logic
## Error Handling
- **Fallback Selection**: Random selection if extraction fails
- **CAP Override**: Selects lowest payout result if all exceed CAP
- **Transaction Rollback**: Database consistency maintained on errors
- **Logging**: Comprehensive debug information for troubleshooting
## Performance Considerations
- **Database Efficiency**: Single transactions for all updates
- **Memory Management**: Streaming result processing for large datasets
- **Concurrent Safety**: Match-level locking prevents race conditions
- **Audit Trail**: Complete history of all extraction decisions
# Full Resync Implementation Summary
## Overview
This document describes the implementation of full resync functionality for the reports sync system. The client now properly handles server responses indicating that a full resync is needed.
## Problem Statement
When the client asks the server for the latest sync it has received, the server may answer that a full resync is needed because it has no record for this client. The server can also respond with a null value for the last sync ID. In both cases, the client should start a full resync with the server.
## Implementation Details
### 1. Modified `query_server_last_sync` Method
**File:** `mbetterclient/api_client/client.py` (lines 1431-1481)
**Changes:**
- Added detection of `needs_full_resync` flag in server response
- Added detection of `null` values for `last_sync_id`
- When either condition is detected, the response is marked with `needs_full_resync: True`
- For 404 responses (no record for client), returns a dict indicating full resync is needed
**Key Logic:**
```python
if needs_full_resync or last_sync_id is None:
    logger.warning(f"Server indicates full resync is needed (needs_full_resync={needs_full_resync}, last_sync_id={last_sync_id})")
    data['needs_full_resync'] = True
    return data
```
### 2. Modified `needs_recovery` Method
**File:** `mbetterclient/api_client/client.py` (lines 1531-1574)
**Changes:**
- Added check for `needs_full_resync` flag in server info
- Added check for `null` `last_sync_id` value
- When either condition is detected, calls `_clear_local_tracking()` to force full resync
- Returns `True` to indicate recovery is needed
**Key Logic:**
```python
needs_full_resync = server_info.get('needs_full_resync', False)
last_sync_id = server_info.get('last_sync_id')
if needs_full_resync or last_sync_id is None:
    logger.warning(f"Server indicates full resync is needed (needs_full_resync={needs_full_resync}, last_sync_id={last_sync_id}) - clearing local tracking")
    self._clear_local_tracking()
    return True
```
### 3. Added `_clear_local_tracking` Method
**File:** `mbetterclient/api_client/client.py` (lines 1575-1592)
**Purpose:**
- Clears all local sync tracking records from the database
- Forces the next sync to be a full sync instead of incremental
**Implementation:**
```python
def _clear_local_tracking(self) -> bool:
    """Clear all local sync tracking records to force full resync"""
    try:
        session = self.db_manager.get_session()
        try:
            deleted_count = session.query(self.ReportsSyncTrackingModel).delete()
            session.commit()
            logger.info(f"Cleared {deleted_count} local sync tracking records to force full resync")
            return True
        except Exception as e:
            logger.error(f"Failed to clear local tracking: {e}")
            session.rollback()
            return False
        finally:
            session.close()
    except Exception as e:
        logger.error(f"Error clearing local tracking: {e}")
        return False
```
### 4. Modified `collect_report_data` Method
**File:** `mbetterclient/api_client/client.py` (lines 1014-1035)
**Changes:**
- Added check for `needs_full_resync` flag after recovery check
- When full resync is needed, forces `date_range = 'all'` to collect all data
- Logs warning message indicating full sync is being performed
**Key Logic:**
```python
needs_full_resync = server_info and server_info.get('needs_full_resync', False)

if needs_full_resync:
    logger.warning("Server indicated full resync is needed - performing full sync")
    date_range = 'all'
```
## Server Response Format
The server should respond to requests at the `/api/reports/last-sync` endpoint in one of the following formats:
### Normal Sync Response
```json
{
  "success": true,
  "needs_full_resync": false,
  "last_sync_id": "sync_20240101_120000_abc123",
  "last_sync_timestamp": "2024-01-01T12:00:00Z",
  "total_syncs": 10
}
```
### Full Resync Response (No Record)
```json
{
  "success": true,
  "needs_full_resync": true,
  "last_sync_id": null,
  "last_sync_timestamp": null,
  "total_syncs": 0
}
```
### Full Resync Response (Null Last Sync ID)
```json
{
  "success": true,
  "needs_full_resync": false,
  "last_sync_id": null,
  "last_sync_timestamp": "2024-01-01T12:00:00Z",
  "total_syncs": 5
}
```
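The client-side decision across all three response shapes reduces to a single predicate. A minimal sketch, assuming the response has already been parsed into a dict (the helper name `should_full_resync` is illustrative, not part of the actual client):

```python
def should_full_resync(response: dict) -> bool:
    """True when the server response calls for a full resync:
    either the explicit flag is set or last_sync_id is null/missing."""
    return bool(response.get('needs_full_resync')) or response.get('last_sync_id') is None

# The three documented response shapes:
normal = {"success": True, "needs_full_resync": False,
          "last_sync_id": "sync_20240101_120000_abc123"}
no_record = {"success": True, "needs_full_resync": True, "last_sync_id": None}
null_id = {"success": True, "needs_full_resync": False, "last_sync_id": None}

print(should_full_resync(normal))     # False
print(should_full_resync(no_record))  # True
print(should_full_resync(null_id))    # True
```

Treating a missing `last_sync_id` key the same as an explicit `null` keeps the check robust against older server versions that omit the field.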
## Test Coverage
Created comprehensive test suite in `test_full_resync.py`:
1. **test_server_indicates_full_resync** - Verifies `needs_full_resync=True` triggers full resync
2. **test_server_null_last_sync_id** - Verifies null `last_sync_id` triggers full resync
3. **test_server_404_response** - Verifies 404 response triggers full resync
4. **test_clear_local_tracking** - Verifies local tracking is cleared
5. **test_normal_sync_no_full_resync** - Verifies normal sync doesn't trigger full resync
6. **test_collect_report_data_with_full_resync** - Verifies full sync is forced when needed
All tests pass successfully.
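The null-ID case can be reproduced with a plain stub, roughly as the suite does it (the `StubClient` below is an illustrative stand-in, not the real `APIClient`):

```python
class StubClient:
    """Illustrative stand-in for the client's recovery check."""
    def __init__(self, server_info):
        self.server_info = server_info
        self.tracking_cleared = False

    def _clear_local_tracking(self):
        # In the real client this deletes ReportsSyncTrackingModel rows.
        self.tracking_cleared = True
        return True

    def needs_recovery(self):
        needs_full_resync = self.server_info.get('needs_full_resync', False)
        if needs_full_resync or self.server_info.get('last_sync_id') is None:
            self._clear_local_tracking()
            return True
        return False

client = StubClient({'success': True, 'needs_full_resync': False, 'last_sync_id': None})
assert client.needs_recovery() is True
assert client.tracking_cleared is True
```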
## Behavior Flow
### Scenario 1: Server Indicates Full Resync
1. Client queries server for last sync info
2. Server responds with `needs_full_resync: true`
3. `needs_recovery()` detects the flag and returns `True`
4. `_clear_local_tracking()` is called to clear all local tracking
5. `collect_report_data()` forces `date_range = 'all'`
6. Full sync is performed with all data
### Scenario 2: Server Returns Null Last Sync ID
1. Client queries server for last sync info
2. Server responds with `last_sync_id: null`
3. `needs_recovery()` detects the null value and returns `True`
4. `_clear_local_tracking()` is called to clear all local tracking
5. `collect_report_data()` forces `date_range = 'all'`
6. Full sync is performed with all data
### Scenario 3: Server Returns 404 (No Record)
1. Client queries server for last sync info
2. Server responds with 404 status code
3. `query_server_last_sync()` returns dict with `needs_full_resync: true`
4. `needs_recovery()` detects the flag and returns `True`
5. `_clear_local_tracking()` is called to clear all local tracking
6. `collect_report_data()` forces `date_range = 'all'`
7. Full sync is performed with all data
## Logging
The implementation includes comprehensive logging:
- `Server indicates full resync is needed (needs_full_resync={value}, last_sync_id={value}) - clearing local tracking`
- `Cleared {count} local sync tracking records to force full resync`
- `Server indicated full resync is needed - performing full sync`
## Benefits
1. **Automatic Recovery**: Client automatically detects when full resync is needed
2. **Data Consistency**: Ensures client and server are synchronized
3. **Robust Error Handling**: Handles multiple scenarios where full resync is needed
4. **Clear Logging**: Provides clear visibility into resync operations
5. **Test Coverage**: Comprehensive test suite ensures reliability
## Files Modified
1. `mbetterclient/api_client/client.py` - Core implementation
2. `test_full_resync.py` - Test suite (new file)
## Backward Compatibility
The implementation is backward compatible:
- Existing sync behavior is preserved when server doesn't indicate full resync
- Only triggers full resync when explicitly needed
- No changes to database schema or API contracts
# Headless HLS Streaming Implementation
## Overview
The headless HLS streaming functionality has been enhanced to work identically to the Qt player, with proper video completion tracking, message sending, and OVER/UNDER video sequence handling.
## Version Update
**Version: 1.0.14** (updated from 1.0.13)
## Key Features Implemented
### 1. Video Duration Detection
- Uses `ffprobe` to detect video duration in seconds
- Stores duration in `current_video_duration` variable
- Records video start time in `video_start_time` variable
### 2. Timer-Based Video Completion
- Sets up a timer when non-looping videos start
- Timer duration = video duration + 2 seconds (buffer)
- Timer triggers `_on_video_end_timer_expired()` callback
- Ensures videos play completely before moving to next
### 3. Video Completion Handling
The `_handle_video_completion()` method sends appropriate messages based on video type:
- **Match video completion**: Sends `PLAY_VIDEO_MATCH_DONE` message
- **Result video completion**: Sends `PLAY_VIDEO_RESULT_DONE` message
- **Intro video completion**: Switches to black screen
### 4. OVER/UNDER Video Sequence Handling
When a video filename starts with "OVER_" or "UNDER_":
1. Detects OVER/UNDER sequence requirement
2. Creates sequence list with both videos
3. Plays videos in order (OVER then UNDER, or UNDER then OVER)
4. After sequence completes, plays result video
5. Sends `PLAY_VIDEO_MATCH_DONE` for the last OVER/UNDER video
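The filename-based detection described above can be sketched as follows (the helper name and pairing logic are illustrative; the actual detection lives in `mbetterclient/core/rtsp_streamer.py`):

```python
import os

def build_over_under_sequence(filename):
    """If the filename starts with OVER_ or UNDER_, return the two-video
    sequence in the order implied by the incoming file; otherwise None."""
    name = os.path.basename(filename)
    if name.startswith('OVER_'):
        return [name, 'UNDER_' + name[len('OVER_'):]]
    if name.startswith('UNDER_'):
        return [name, 'OVER_' + name[len('UNDER_'):]]
    return None

print(build_over_under_sequence('OVER_match42.mp4'))
# ['OVER_match42.mp4', 'UNDER_match42.mp4']
print(build_over_under_sequence('intro.mp4'))
# None
```

The streamer would still need to verify that the companion video actually exists in the extracted directory before committing to the sequence.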
### 5. Message Bus Communication
The headless streamer communicates with other components using the same messages as Qt player:
- **PLAY_VIDEO_MATCH**: Start playing match video
- **PLAY_VIDEO_MATCH_DONE**: Notify match video completed
- **PLAY_VIDEO_RESULT**: Start playing result video
- **PLAY_VIDEO_RESULT_DONE**: Notify result video completed
- **START_INTRO**: Start intro video (loops until next match)
## Workflow
### Normal Match Flow
```
1. START_INTRO received
→ Play INTRO video (looping)
2. PLAY_VIDEO_MATCH received
→ Stop intro
→ Play match video (non-looping)
→ Set completion timer
3. Timer expires / video ends
→ Send PLAY_VIDEO_MATCH_DONE
→ Switch to black screen
4. PLAY_VIDEO_RESULT received
→ Play result video (non-looping)
→ Set completion timer
5. Timer expires / video ends
→ Send PLAY_VIDEO_RESULT_DONE
→ Switch to black screen
6. START_INTRO received (for next match)
→ Back to step 1
```
### OVER/UNDER Match Flow
```
1. START_INTRO received
→ Play INTRO video (looping)
2. PLAY_VIDEO_MATCH received (OVER_xxx.mp4 or UNDER_xxx.mp4)
→ Detect OVER/UNDER sequence
→ Play OVER video (non-looping)
→ Set completion timer
3. Timer expires / OVER video ends
→ Send PLAY_VIDEO_MATCH_DONE for OVER
→ Play UNDER video (non-looping)
→ Set completion timer
4. Timer expires / UNDER video ends
→ Send PLAY_VIDEO_MATCH_DONE for UNDER
→ Play result video (non-looping)
→ Set completion timer
5. Timer expires / result video ends
→ Send PLAY_VIDEO_RESULT_DONE
→ Switch to black screen
6. START_INTRO received (for next match)
→ Back to step 1
```
## Usage
### Starting Headless Mode
```bash
# Start application in headless mode
python main.py --headless
# With custom streamer port
python main.py --headless --streamer-port 5884
# With test mode (auto-starts intro)
python main.py --headless --test-stream
```
### Viewing the Stream
1. Open VLC Media Player
2. File → Open Network Stream
3. Enter: `http://127.0.0.1:5884/mbetterc_stream.m3u8`
4. Click Play
### Testing
Run the test script to verify functionality:
```bash
python test_headless_streaming.py
```
This will:
- Start the headless streamer
- Simulate games_thread behavior
- Send test messages (INTRO → MATCH → RESULT → INTRO)
- Verify message sending and video completion
- Log all events for debugging
## Technical Details
### Video Duration Detection
```python
def _get_video_duration(self, video_path: str) -> float:
    """Get video duration in seconds using ffprobe"""
    cmd = [
        'ffprobe',
        '-v', 'error',
        '-show_entries', 'format=duration',
        '-of', 'default=noprint_wrappers=1:nokey=1',
        str(video_path)
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    if result.returncode == 0:
        return float(result.stdout.strip())
    return 0.0
```
### Timer Setup
```python
# Set up video completion timer if not looping
if not loop and duration > 0:
    timer_delay = duration + 2.0  # Add buffer
    logger.info(f"Setting video completion timer for {timer_delay:.2f} seconds")
    self.video_end_timer = threading.Timer(timer_delay, self._on_video_end_timer_expired)
    self.video_end_timer.daemon = True
    self.video_end_timer.start()
```
### Message Sending
```python
# Match video completed
if self.current_video_type == 'match':
    done_message = MessageBuilder.play_video_match_done(
        sender=self.name,
        match_id=self.current_match_id,
        video_filename=self.current_match_video_filename,
        fixture_id=self.current_fixture_id
    )
    self.message_bus.publish(done_message, broadcast=True)

# Result video completed
elif self.current_video_type == 'result':
    done_message = MessageBuilder.play_video_result_done(
        sender=self.name,
        fixture_id=self.current_fixture_id,
        match_id=self.current_match_id,
        result=self.current_result
    )
    self.message_bus.publish(done_message, broadcast=True)
```
## Files Modified
1. **mbetterclient/core/rtsp_streamer.py**
- Added video duration detection
- Added timer-based completion tracking
- Added OVER/UNDER sequence handling
- Added proper message sending on video completion
- Enhanced video switching with loop parameter
2. **build.py**
- Updated version to 1.0.14
3. **main.py**
- Updated version to 1.0.14
4. **mbetterclient/config/settings.py**
- Updated version to 1.0.14
- Updated user_agent to "MbetterClient/1.0r14"
5. **mbetterclient/__init__.py**
- Updated version to 1.0.14
6. **mbetterclient/web_dashboard/app.py**
- Updated version to 1.0.14
7. **test_reports_sync_fix.py**
- Updated user_agent to "MbetterClient/1.0r14"
8. **test_headless_streaming.py** (new file)
- Comprehensive test script for headless streaming
- Simulates games_thread behavior
- Tests complete workflow
## Integration with Games Thread
The headless streamer integrates seamlessly with the games_thread:
1. **Games thread sends messages**:
- `START_INTRO`: Start intro video
- `PLAY_VIDEO_MATCH`: Start match video
- `PLAY_VIDEO_RESULT`: Start result video
2. **Headless streamer responds**:
- `PLAY_VIDEO_MATCH_DONE`: Match video completed
- `PLAY_VIDEO_RESULT_DONE`: Result video completed
3. **Games thread continues**:
- After `PLAY_VIDEO_MATCH_DONE`: Sends `PLAY_VIDEO_RESULT`
- After `PLAY_VIDEO_RESULT_DONE`: Sends `NEXT_MATCH` or `START_INTRO`
This ensures the headless streamer works identically to the Qt player from the games_thread's perspective.
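The handshake above can be simulated end to end with a toy bus (the `FakeBus` class and the lambda handlers are illustrative, not the real MessageBus API):

```python
class FakeBus:
    """Toy message bus: one handler per message type, with a publish log."""
    def __init__(self):
        self.handlers = {}
        self.log = []

    def subscribe(self, msg_type, handler):
        self.handlers[msg_type] = handler

    def publish(self, msg_type, **payload):
        self.log.append(msg_type)
        handler = self.handlers.get(msg_type)
        if handler:
            handler(**payload)

bus = FakeBus()

# Streamer side: a finished video immediately reports completion.
bus.subscribe('PLAY_VIDEO_MATCH',
              lambda **p: bus.publish('PLAY_VIDEO_MATCH_DONE', **p))
bus.subscribe('PLAY_VIDEO_RESULT',
              lambda **p: bus.publish('PLAY_VIDEO_RESULT_DONE', **p))

# Games-thread side: advance the flow on each completion.
bus.subscribe('PLAY_VIDEO_MATCH_DONE',
              lambda **p: bus.publish('PLAY_VIDEO_RESULT', **p))
bus.subscribe('PLAY_VIDEO_RESULT_DONE',
              lambda **p: bus.publish('START_INTRO', **p))

bus.publish('PLAY_VIDEO_MATCH', match_id=1)
print(bus.log)
# ['PLAY_VIDEO_MATCH', 'PLAY_VIDEO_MATCH_DONE', 'PLAY_VIDEO_RESULT',
#  'PLAY_VIDEO_RESULT_DONE', 'START_INTRO']
```

In the real system the streamer's completion messages are driven by the video-end timer rather than fired synchronously, but the message ordering is the same.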
## Troubleshooting
### Video Not Playing
- Check FFmpeg is installed: `ffmpeg -version`
- Check ffprobe is available: `ffprobe -version`
- Check video file exists and is readable
- Check logs for errors
### Messages Not Being Sent
- Enable debug mode: `--debug-player`
- Check message bus logs: `--dev-message`
- Verify message handlers are registered
### Timer Not Firing
- Check video duration is detected correctly
- Check timer is started (look for "Setting video completion timer" in logs)
- Check for exceptions in timer callback
### OVER/UNDER Not Working
- Check video filenames start with "OVER_" or "UNDER_"
- Check both videos exist in extracted directory
- Check logs for sequence detection
## Future Enhancements
Potential improvements for future versions:
1. **Real-time progress tracking**: Send VIDEO_PROGRESS messages during playback
2. **Seek support**: Allow seeking within videos (requires FFmpeg restart)
3. **Volume control**: Adjust audio volume dynamically
4. **Multiple audio tracks**: Support for different audio languages
5. **Subtitle support**: Overlay subtitles on video stream
## Conclusion
The headless HLS streaming implementation now provides full parity with the Qt player functionality, including:
✓ Video duration detection
✓ Timer-based completion tracking
✓ Proper message sending (PLAY_VIDEO_MATCH_DONE, PLAY_VIDEO_RESULT_DONE)
✓ OVER/UNDER video sequence handling
✓ Seamless integration with games_thread
✓ Complete workflow support (INTRO → MATCH → RESULT → INTRO)
This allows the application to run in headless mode with full video playback capabilities, suitable for deployment on servers or environments without a display.
MIT License
Copyright (c) 2025 MBetter Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
#!/bin/bash
# MbetterClient Comprehensive Compatibility Wrapper
# Automatically detects virtualization and hardware acceleration
# Applies Mesa GLX with transparency optimizations when needed
echo "=== MbetterClient Comprehensive Compatibility Wrapper ==="
echo "Detecting environment and applying optimal settings..."
echo ""
# Function to detect virtualization
detect_virtualization() {
    local VIRT_TYPE=""

    # VMware detection
    if [ -f "/proc/driver/vmwgfx/version" ] || \
       grep -q "VMware" /proc/scsi/scsi 2>/dev/null || \
       lspci 2>/dev/null | grep -i vmware >/dev/null 2>&1; then
        VIRT_TYPE="VMware"
    fi

    # VirtualBox detection
    if [ -z "$VIRT_TYPE" ]; then
        if [ -f "/proc/driver/vboxdrv" ] || \
           lsmod 2>/dev/null | grep -q vbox || \
           lspci 2>/dev/null | grep -i virtualbox >/dev/null 2>&1; then
            VIRT_TYPE="VirtualBox"
        fi
    fi

    # Other virtualization detection
    # (/dev/kvm is a device node, so test with -e rather than -f)
    if [ -z "$VIRT_TYPE" ]; then
        if grep -q "QEMU" /proc/cpuinfo 2>/dev/null || \
           [ -e "/dev/kvm" ] || \
           systemd-detect-virt 2>/dev/null | grep -v "none" >/dev/null 2>&1; then
            VIRT_TYPE="Other"
        fi
    fi

    echo "$VIRT_TYPE"
}
# Function to test hardware acceleration
test_hw_acceleration() {
    echo "Testing hardware acceleration availability..."

    # Test 1: Check for NVIDIA GPU
    if command -v nvidia-smi >/dev/null 2>&1; then
        if nvidia-smi --query-gpu=name --format=csv,noheader >/dev/null 2>&1; then
            echo "✓ NVIDIA GPU detected"
            return 0
        fi
    fi

    # Test 2: Check for AMD GPU
    if lspci 2>/dev/null | grep -i "vga\|3d\|display" | grep -i "amd\|ati\|radeon" >/dev/null 2>&1; then
        echo "✓ AMD GPU detected"
        return 0
    fi

    # Test 3: Check for Intel integrated graphics
    if lspci 2>/dev/null | grep -i "vga\|3d\|display" | grep -i "intel" >/dev/null 2>&1; then
        echo "✓ Intel integrated graphics detected"
        return 0
    fi

    # Test 4: Try to initialize an OpenGL context
    if command -v glxinfo >/dev/null 2>&1; then
        if glxinfo 2>/dev/null | grep -q "OpenGL version"; then
            echo "✓ OpenGL context available"
            return 0
        fi
    fi

    # Test 5: Check Vulkan availability
    if command -v vulkaninfo >/dev/null 2>&1; then
        if vulkaninfo --summary >/dev/null 2>&1; then
            echo "✓ Vulkan available"
            return 0
        fi
    fi

    echo "✗ No hardware acceleration detected"
    return 1
}
# Function to setup Mesa software rendering with optimizations
setup_mesa_software() {
    echo "Setting up Mesa software rendering with transparency optimizations..."

    # Create optimized temp directory
    USER_TEMP="$HOME/.cache/MbetterClient"
    mkdir -p "$USER_TEMP"

    # Redirect all temp directories
    export TMPDIR="$USER_TEMP"
    export TEMP="$USER_TEMP"
    export TMP="$USER_TEMP"
    export XDG_RUNTIME_DIR="$USER_TEMP"

    # Mesa software rendering optimizations
    export LIBGL_ALWAYS_SOFTWARE=1
    export MESA_GL_VERSION_OVERRIDE=3.3
    export MESA_GLSL_VERSION_OVERRIDE=330
    export LP_NUM_THREADS=$(nproc)        # Use all CPU cores
    export MESA_GLSL_CACHE_DISABLE=0      # Enable GLSL caching
    export MESA_SHADER_CACHE_DISABLE=0    # Enable shader caching
    export MESA_NO_VULKAN=1               # Disable Vulkan in Mesa

    # MESA TRANSPARENCY FIXES - Critical for overlay transparency
    export MESA_GLX_FORCE_ALPHA=1         # Force alpha channel support
    export MESA_GLX_FORCE_TRANSPARENT=1   # Force transparency support

    # Qt configuration for virtualized environments
    export QT_QPA_PLATFORM=xcb            # Ensure XCB platform for transparency
    export QT_XCB_GL_INTEGRATION=xcb_egl  # Better transparency with Mesa
    export QT_OPENGL=software
    export QTWEBENGINE_DISABLE_SANDBOX=1
    export QT_ENABLE_HIGHDPI_SCALING=0    # Better performance
    export QT_QUICK_BACKEND=software      # Force software backend for Qt Quick
    export QT_QPA_PLATFORM_PLUGIN_PATH="" # Let Qt find plugins automatically
    export QT_DEBUG_PLUGINS=0             # Reduce debug output
    export QT_LOGGING_RULES="qt.qpa.plugin=false"  # Reduce plugin loading messages

    # Qt WebEngine configuration for software rendering
    export QTWEBENGINE_CHROMIUM_FLAGS="--no-sandbox --disable-gpu --disable-gpu-sandbox --disable-dev-shm-usage --disable-software-rasterizer --disable-accelerated-video-decode --disable-accelerated-video-encode --disable-gpu-compositing --disable-gpu-rasterization --disable-vulkan --disable-vulkan-surface --disable-features=Vulkan --user-data-dir=$USER_TEMP --enable-transparent-visuals --disable-background-timer-throttling --disable-renderer-backgrounding --disable-vulkan-fallback"

    # Additional video-specific settings for virtualized environments
    export QT_MULTIMEDIA_PREFERRED_PLUGINS=""  # Let Qt choose best plugin
    export QT_GSTREAMER_PLAYBIN_FLAGS=0        # Disable problematic GStreamer features
    export QT_MULTIMEDIA_DISABLE_GSTREAMER=1   # Disable GStreamer backend
    export QT_MULTIMEDIA_FORCE_FFMPEG=1        # Force FFmpeg backend
    export QT_MULTIMEDIA_DISABLE_VIDEOSINK=1   # Disable video sink in virtual environments

    # Vulkan disables
    export VK_ICD_FILENAMES=""
    export DISABLE_VULKAN=1
    export MESA_VK_DISABLE=1

    echo "✓ Mesa software rendering configured"
    echo "✓ Transparency optimizations enabled"
    echo "✓ Multi-threaded rendering: $LP_NUM_THREADS cores"
}
# Function to setup hardware acceleration
setup_hw_acceleration() {
    echo "Setting up hardware acceleration..."

    # Create temp directory
    USER_TEMP="$HOME/.cache/MbetterClient"
    mkdir -p "$USER_TEMP"

    # Minimal Chromium flags for hardware acceleration
    export QTWEBENGINE_CHROMIUM_FLAGS="--user-data-dir=$USER_TEMP"

    # Qt configuration for hardware
    export QT_QPA_PLATFORM=xcb
    export QT_ENABLE_HIGHDPI_SCALING=1

    echo "✓ Hardware acceleration enabled"
}
# Main detection and configuration logic
VIRT_TYPE=$(detect_virtualization)
HW_ACCEL=0
if [ -n "$VIRT_TYPE" ]; then
    echo "✓ Virtualization detected: $VIRT_TYPE"
    NEEDS_SOFTWARE=1
else
    echo "✓ Physical hardware detected"
    if test_hw_acceleration; then
        HW_ACCEL=1
        NEEDS_SOFTWARE=0
    else
        echo "⚠ Hardware acceleration not available or not working"
        NEEDS_SOFTWARE=1
    fi
fi
# Apply appropriate configuration
if [ $NEEDS_SOFTWARE -eq 1 ]; then
    echo ""
    echo "=== Applying Software Rendering Configuration ==="
    setup_mesa_software
    CONFIG_TYPE="Mesa Software Rendering (Optimized)"
else
    echo ""
    echo "=== Applying Hardware Acceleration Configuration ==="
    setup_hw_acceleration
    CONFIG_TYPE="Hardware Acceleration"
fi
# Final setup
echo ""
echo "=== Configuration Summary ==="
echo "Environment: $(if [ -n "$VIRT_TYPE" ]; then echo "$VIRT_TYPE VM"; else echo "Physical Hardware"; fi)"
echo "Acceleration: $(if [ $HW_ACCEL -eq 1 ]; then echo "Hardware"; else echo "Software"; fi)"
echo "Rendering: $CONFIG_TYPE"
echo "Temp Directory: $USER_TEMP"
if [ -n "$VIRT_TYPE" ]; then
    echo ""
    echo "⚠️  VIRTUALIZED ENVIRONMENT DETECTED:"
    echo "   Video playback may not be available due to graphics limitations."
    echo "   The application will run in audio-only mode if video fails."
    echo "   This is normal behavior in virtual machines."
fi
echo ""
# Verify binary exists
if [ ! -f "./MbetterClient" ]; then
    echo "ERROR: MbetterClient binary not found in current directory"
    echo "Please run this script from the directory containing MbetterClient"
    exit 1
fi
echo "Starting MbetterClient..."
echo "=========================================="
# Execute with all configured settings
exec ./MbetterClient "$@"
# PyQt6 Upgrade Summary
## Overview
Successfully replaced the PyQt5 video player implementation with a comprehensive PyQt6 multi-threaded video player featuring advanced QWebEngineView overlay system and full message bus integration.
## Changes Made
### 1. Core Player Implementation (mbetterclient/qt_player/player.py)
**REPLACED** the entire PyQt5 implementation with PyQt6:
#### Key Components:
- **QtVideoPlayer**: Main threaded component with message bus integration
- **PlayerWindow**: Enhanced main window with hardware-accelerated video playback
- **VideoWidget**: Composite widget combining QVideoWidget + QWebEngineView
- **OverlayWebView**: Custom QWebEngineView with transparent background support
- **OverlayWebChannel**: QObject for bidirectional Python ↔ JavaScript communication
- **PlayerControlsWidget**: Thread-safe video controls with enhanced styling
- **VideoProcessingWorker**: QRunnable for background video processing tasks
#### PyQt6 Features:
- **QMediaPlayer + QAudioOutput**: Modern PyQt6 audio/video architecture
- **QVideoWidget**: Hardware-accelerated video rendering
- **QWebEngineView**: Professional overlay system with CSS3 animations
- **QWebChannel**: Real-time Python ↔ JavaScript communication
- **QMutex + QMutexLocker**: Thread-safe operations
- **QThreadPool**: Managed background processing
### 2. Overlay System (mbetterclient/qt_player/overlay.html)
**CREATED** comprehensive HTML overlay with:
- **CSS3 Keyframe Animations**: Professional title animations with scaling effects
- **JavaScript Integration**: Real-time data updates from Python via QWebChannel
- **HTML5 Canvas**: Custom graphics overlay with particle systems
- **Responsive Design**: Automatic scaling for different resolutions
- **GSAP-ready Structure**: Animation framework integration support
### 3. Legacy Removal (overlay_engine.py)
**REMOVED** the legacy native overlay engine entirely:
- The native Qt overlay implementation was replaced by the superior QWebEngineView system
- All overlay functionality now uses HTML/CSS/JavaScript templates exclusively
- Removed OverlayEngine and OverlayRenderer classes as they are no longer needed
### 4. Message Bus Integration
**ENHANCED** message handling for complete thread communication:
#### Supported Message Types:
- `VIDEO_PLAY`: Play video with optional overlay data
- `VIDEO_PAUSE`: Pause playback
- `VIDEO_STOP`: Stop playback
- `VIDEO_SEEK`: Seek to position
- `VIDEO_VOLUME`: Volume control
- `VIDEO_FULLSCREEN`: Fullscreen toggle
- `TEMPLATE_CHANGE`: Update overlay template
- `OVERLAY_UPDATE`: Real-time overlay data updates
- `STATUS_REQUEST`: Video player status queries
#### Outgoing Messages:
- Progress updates during playback
- Video loaded notifications
- System status responses
- Error notifications
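Routing these message types typically reduces to a dispatch table keyed on message type. A minimal sketch (the class and handler names are hypothetical, not the real `QtVideoPlayer` interface):

```python
class PlayerDispatch:
    """Illustrative message-type -> handler routing for a video player."""
    def __init__(self):
        self.calls = []
        self.handlers = {
            'VIDEO_PLAY':  self.on_play,
            'VIDEO_PAUSE': self.on_pause,
            'VIDEO_STOP':  self.on_stop,
        }

    def on_play(self, payload):
        self.calls.append(('play', payload.get('file_path')))

    def on_pause(self, payload):
        self.calls.append(('pause', None))

    def on_stop(self, payload):
        self.calls.append(('stop', None))

    def handle(self, msg_type, payload=None):
        handler = self.handlers.get(msg_type)
        if handler is None:
            return False  # unknown types are ignored, not fatal
        handler(payload or {})
        return True

player = PlayerDispatch()
assert player.handle('VIDEO_PLAY', {'file_path': '/tmp/a.mp4'})
assert not player.handle('UNKNOWN_TYPE')
print(player.calls)  # [('play', '/tmp/a.mp4')]
```

Ignoring unknown types keeps the player forward-compatible when new message types are added to the bus.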
## Technical Improvements
### Thread Safety
- **QMutex Protection**: All shared resources protected with mutexes
- **Thread-Safe Signals**: Position updates, video loading events
- **Background Processing**: Metadata extraction, thumbnail generation
### Performance Optimizations
- **Hardware Acceleration**: Native video decoding when available
- **60 FPS Overlay Rendering**: Smooth animations and updates
- **Memory Management**: Automatic cleanup and resource deallocation
- **Thread Pool**: Configurable concurrent task processing
### Cross-Platform Support
- **Windows**: DirectShow/Media Foundation acceleration
- **macOS**: VideoToolbox acceleration
- **Linux**: VA-API/VDPAU acceleration
## Integration with Existing System
### No Breaking Changes
- The existing `application.py` continues to work seamlessly
- Same `QtVideoPlayer` class name and interface
- Full backward compatibility with message bus system
- All existing API endpoints remain functional
### Enhanced Capabilities
- **Bidirectional Communication**: JavaScript can now send data back to Python
- **Real-time Updates**: Dynamic overlay content updates during playback
- **Professional UI**: Modern video player controls with auto-hide functionality
- **Advanced Overlays**: HTML/CSS/JavaScript-based overlay system
## Usage
### Basic Playback
The existing application works without changes:
```bash
python main.py --enable-qt
```
### Video Control via Message Bus
```python
# Play video with overlay
play_message = MessageBuilder.video_play(
    sender="web_dashboard",
    file_path="/path/to/video.mp4",
    overlay_data={
        'title': 'Breaking News',
        'subtitle': 'Live Coverage',
        'ticker': 'Real-time updates...'
    }
)
message_bus.publish(play_message)
```
### Dynamic Overlay Updates
```python
# Update overlay in real-time
overlay_message = MessageBuilder.overlay_update(
    sender="web_dashboard",
    overlay_data={
        'title': 'Updated Title',
        'showStats': True
    }
)
message_bus.publish(overlay_message)
```
## Files Created/Modified
### New Files:
- `mbetterclient/qt_player/overlay.html` - HTML overlay system
- `test_qt6_player.py` - Standalone test application
- `PyQt6_VIDEO_PLAYER_DOCUMENTATION.md` - Comprehensive documentation
### Modified Files:
- `mbetterclient/qt_player/player.py` - **COMPLETELY REPLACED** with PyQt6 implementation
- `mbetterclient/qt_player/__init__.py` - Removed legacy overlay engine imports
### Removed Files:
- `mbetterclient/qt_player/overlay_engine.py` - Legacy native overlay implementation removed
### Unchanged Files:
- `mbetterclient/core/application.py` - Works seamlessly with new implementation
- All other application components remain fully functional
## Testing
### Standalone Test:
```bash
python test_qt6_player.py standalone
```
### Full Integration Test:
```bash
python main.py --enable-qt --enable-web
```
## Result
- **Complete PyQt6 upgrade successful**
- **Full message bus integration maintained**
- **Enhanced overlay capabilities added**
- **Thread-safe operations implemented**
- **Cross-platform compatibility ensured**
- **No breaking changes to existing system**
The MbetterClient application now features a professional-grade, PyQt6-based video player with advanced HTML overlay capabilities while maintaining full compatibility with the existing multi-threaded architecture and message bus communication system.
# Netscape HTTP Cookie File
# https://curl.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_127.0.0.1 FALSE / FALSE 0 session .eJwlzj0OwjAMQOG7ZGaof2LHvUyVxLZgTemEuDuV2N-Tvk85csX5LPt7XfEox8vLXoIZkY2FESapD3Yga6kIHNw6JUnidGftW6ZynVul6BGm5lBTEkzlrprNmB6OTdQbior3SjrqmDxti_tRkQ5JqTCEOKyjlBtynbH-GizfH6DyL20.aKzyUQ.JbC1Zst6dLOPb6oWBcnMziVCKFk
#HttpOnly_127.0.0.1 FALSE / FALSE 1787700689 remember_token 2|6839ea0e127e93742c1d52ba7c14f92c9b5b4e4a2fe97bb74c5b862f79ef1ce97a0297989c74f951dd7eba7340fd23ec4cb5e25a37c3dc28d53f13921f5f0776
"""
REST API Client for MbetterClient
This module provides a threaded HTTP client for making configurable requests
to external APIs with proper error handling and retry logic.
"""
from .client import APIClient
__all__ = ['APIClient']
../assets/