Fix fixture parser fighter column mapping bug (v1.2.2)

- Fixed critical bug where both fighter1 and fighter2 were incorrectly sourced from the fighter1 column during XLSX uploads
- Enhanced FixtureParser.detect_required_columns() with fighter-specific matching logic in app/upload/fixture_parser.py
- Added proper fighter number detection (1 or 2) to prevent cross-mapping during partial column matching
- Fighter1 columns now correctly map to fighter1 database field, Fighter2 to fighter2 field
- Prevents false positives where 'fighter2' column incorrectly matched 'fighter1' field
- Root cause: Original partial matching used split()[0] causing both 'fighter 1' and 'fighter 2' to become 'fighter'
- Solution: Implemented specific logic checking for both 'fighter' keyword AND number in column names
- Updated documentation (README.md, CHANGELOG.md) with bug fix details and version bump to v1.2.2
- Maintains backward compatibility with existing column naming conventions
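The root cause is easy to reproduce in isolation. A minimal standalone sketch (not the parser code itself) of why `split()[0]`-based partial matching collapses both fighter columns onto the same key:

```python
# Standalone illustration of the bug described above (not the actual parser).
# The old partial matching compared only the first word of each candidate
# name, so "fighter 1" and "fighter 2" both degenerate to "fighter".
possible_names = ["fighter 1", "fighter 2"]

first_words = [name.split()[0] for name in possible_names]
print(first_words)  # ['fighter', 'fighter'] - both candidates collapse

# Against a spreadsheet column like "fighter 2", both candidates therefore
# "match", and whichever field is processed first (fighter1) claims the column.
col_name = "fighter 2"
matches = [name for name in possible_names if name.split()[0] in col_name]
print(matches)  # ['fighter 1', 'fighter 2'] - an ambiguous match
```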
parent 96664e04
@@ -5,6 +5,24 @@ All notable changes to the Fixture Manager daemon project will be documented in
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.2.2] - 2025-08-21
### Fixed
- **Fixture parser fighter column mapping bug** - Fixed issue where both fighter1 and fighter2 were being sourced from the fighter1 column
- Improved partial column matching logic in [`FixtureParser.detect_required_columns()`](app/upload/fixture_parser.py:179)
- Added specific logic for fighter column detection to prevent cross-mapping
- Fighter1 columns now properly map to fighter1 database field
- Fighter2 columns now properly map to fighter2 database field
- Enhanced matching prevents false positives (e.g., "fighter2" column incorrectly matching "fighter1" field)
- Maintains backward compatibility with existing column naming conventions
### Technical Details
- **Root Cause**: The original partial matching logic used `possible_name.split()[0]` which caused both "fighter 1" and "fighter 2" to become "fighter" after splitting, leading to incorrect column mapping
- **Solution**: Implemented fighter-specific matching logic that checks for both the "fighter" keyword and the specific number (1 or 2) in the column name
- **Impact**: Ensures accurate data mapping during fixture file uploads, preventing fighter data mix-ups
---
## [1.2.1] - 2025-08-21
### Added
...@@ -695,11 +695,18 @@ curl -H "Authorization: Bearer $API_TOKEN" \ ...@@ -695,11 +695,18 @@ curl -H "Authorization: Bearer $API_TOKEN" \
--- ---
**Version**: 1.2.1 **Version**: 1.2.2
**Last Updated**: 2025-08-21 **Last Updated**: 2025-08-21
**Minimum Requirements**: Python 3.8+, MySQL 5.7+, Linux/Windows/macOS **Minimum Requirements**: Python 3.8+, MySQL 5.7+, Linux/Windows/macOS
### Recent Updates (v1.2.1) - PyInstaller Enhancement ### Recent Updates (v1.2.2) - Bug Fix
-**Fixture Parser Fighter Column Fix**: Fixed critical bug where both fighter1 and fighter2 were incorrectly mapped to fighter1 column during XLSX upload
- Enhanced [`FixtureParser.detect_required_columns()`](app/upload/fixture_parser.py:179) with specific fighter number matching logic
- Prevents cross-mapping of fighter columns during partial column name matching
- Ensures accurate fighter data separation in database records
- Maintains compatibility with all existing column naming conventions
### Updates (v1.2.1) - PyInstaller Enhancement
-**Cross-Platform Persistent Directories**: Windows (%APPDATA%), macOS (~/Library/Application Support), Linux (/opt/MBetter) -**Cross-Platform Persistent Directories**: Windows (%APPDATA%), macOS (~/Library/Application Support), Linux (/opt/MBetter)
-**Configuration Migration**: Automatic .env to mbetterd.conf migration for PyInstaller deployments -**Configuration Migration**: Automatic .env to mbetterd.conf migration for PyInstaller deployments
-**Authenticated ZIP Downloads**: Secure API endpoint for ZIP file downloads with token authentication -**Authenticated ZIP Downloads**: Secure API endpoint for ZIP file downloads with token authentication
......
@@ -200,14 +200,35 @@ class FixtureParser:
if not found:
    logger.warning(f"Required column not found for field: {field}")
    # Try partial matching with more specific logic
    for col_name in normalized_columns:
        for possible_name in possible_names:
            # For fighter columns, ensure we match the specific fighter number
            if field in ['fighter1', 'fighter2']:
                # Extract the fighter number from the field name
                fighter_num = field[-1]  # '1' or '2'
                # Check if the column name contains both "fighter" and the specific number
                if 'fighter' in col_name and fighter_num in col_name:
                    # Additional check: make sure it's not a false positive,
                    # e.g. don't match a "fighter2" column when looking for "fighter1"
                    if field == 'fighter1' and '2' not in col_name.replace('fighter', ''):
                        column_mapping[field] = normalized_columns[col_name]
                        logger.info(f"Found partial match for {field}: {col_name}")
                        found = True
                        break
                    elif field == 'fighter2' and '1' not in col_name.replace('fighter', ''):
                        column_mapping[field] = normalized_columns[col_name]
                        logger.info(f"Found partial match for {field}: {col_name}")
                        found = True
                        break
            else:
                # For non-fighter fields, use the original partial matching logic
                first_word = possible_name.split()[0]
                if len(first_word) > 2 and first_word in col_name:  # Avoid matching very short words
                    column_mapping[field] = normalized_columns[col_name]
                    logger.info(f"Found partial match for {field}: {col_name}")
                    found = True
                    break
        if found:
            break
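Extracted into a standalone helper (a sketch that mirrors the diff above; the sample column headers are hypothetical), the fighter-specific check behaves like this:

```python
def matches_fighter_field(field: str, col_name: str) -> bool:
    """Sketch of the fighter-specific matching from the diff above.

    The column must contain 'fighter' and the field's own number, and must
    not contain the *other* fighter's number outside the word 'fighter'
    itself (the .replace() guard against false positives).
    """
    fighter_num = field[-1]  # '1' or '2'
    other_num = '2' if fighter_num == '1' else '1'
    if 'fighter' not in col_name or fighter_num not in col_name:
        return False
    return other_num not in col_name.replace('fighter', '')

# Quick checks with sample (hypothetical) column headers:
assert matches_fighter_field('fighter1', 'fighter 1')
assert matches_fighter_field('fighter2', 'fighter 2 name')
assert not matches_fighter_field('fighter1', 'fighter 2')  # no cross-mapping
assert not matches_fighter_field('fighter2', 'fighter 1')
```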
@@ -41,6 +41,7 @@ CREATE TABLE IF NOT EXISTS matches (
    file_sha1sum VARCHAR(255) NOT NULL COMMENT 'SHA1 checksum of fixture file',
    fixture_id VARCHAR(255) NOT NULL UNIQUE COMMENT 'Unique fixture identifier',
    active_status BOOLEAN DEFAULT FALSE COMMENT 'Active status flag',
    fixture_active_time BIGINT NULL COMMENT 'Unix timestamp when fixture became active',

    -- ZIP file related fields
    zip_filename VARCHAR(1024) NULL COMMENT 'Associated ZIP filename',
@@ -56,6 +57,7 @@ CREATE TABLE IF NOT EXISTS matches (
    INDEX idx_match_number (match_number),
    INDEX idx_fixture_id (fixture_id),
    INDEX idx_active_status (active_status),
    INDEX idx_fixture_active_time (fixture_active_time),
    INDEX idx_file_sha1sum (file_sha1sum),
    INDEX idx_zip_sha1sum (zip_sha1sum),
    INDEX idx_zip_upload_status (zip_upload_status),
@@ -169,6 +171,45 @@ CREATE TABLE IF NOT EXISTS user_sessions (
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- API Tokens table for user-generated tokens
CREATE TABLE IF NOT EXISTS api_tokens (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL COMMENT 'User who owns this token',
    name VARCHAR(255) NOT NULL COMMENT 'Descriptive name for the token',
    token_hash VARCHAR(255) NOT NULL UNIQUE COMMENT 'SHA256 hash of the token',
    expires_at TIMESTAMP NOT NULL COMMENT 'Token expiration time',
    is_active BOOLEAN DEFAULT TRUE COMMENT 'Whether token is active',
    last_used_at TIMESTAMP NULL COMMENT 'Last time token was used',
    last_used_ip VARCHAR(45) NULL COMMENT 'Last IP address that used token',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_token_hash (token_hash),
    INDEX idx_user_id (user_id),
    INDEX idx_expires_at (expires_at),
    INDEX idx_is_active (is_active),
    INDEX idx_last_used_at (last_used_at),
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- System Settings table for configuration management
CREATE TABLE IF NOT EXISTS system_settings (
    id INT AUTO_INCREMENT PRIMARY KEY,
    setting_key VARCHAR(255) NOT NULL UNIQUE COMMENT 'Setting key identifier',
    setting_value TEXT NOT NULL COMMENT 'Setting value (can be JSON)',
    setting_type ENUM('string', 'integer', 'float', 'boolean', 'json') DEFAULT 'string' COMMENT 'Data type of the setting',
    description TEXT NULL COMMENT 'Human-readable description of the setting',
    category VARCHAR(100) DEFAULT 'general' COMMENT 'Setting category for organization',
    is_public BOOLEAN DEFAULT FALSE COMMENT 'Whether setting can be viewed by non-admin users',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_setting_key (setting_key),
    INDEX idx_category (category),
    INDEX idx_is_public (is_public)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- Create default admin user (password: admin123 - CHANGE IN PRODUCTION!)
INSERT INTO users (username, email, password_hash, is_admin)
VALUES (
@@ -178,15 +219,25 @@ VALUES (
    TRUE
) ON DUPLICATE KEY UPDATE username=username;

-- Insert default system settings
INSERT INTO system_settings (setting_key, setting_value, setting_type, description, category) VALUES
('api_updates_default_count', '10', 'integer', 'Default number of fixtures returned by /api/updates when no from parameter is provided', 'api'),
('max_upload_size', '2147483648', 'integer', 'Maximum upload file size in bytes (2GB default)', 'uploads'),
('session_timeout', '3600', 'integer', 'User session timeout in seconds (1 hour default)', 'security'),
('cleanup_interval', '86400', 'integer', 'Interval for cleanup tasks in seconds (24 hours default)', 'maintenance')
ON DUPLICATE KEY UPDATE setting_key=setting_key;

-- Create indexes for performance optimization
CREATE INDEX idx_matches_composite ON matches(active_status, zip_upload_status, created_at);
CREATE INDEX idx_matches_fixture_time ON matches(fixture_active_time, fixture_id);
CREATE INDEX idx_outcomes_composite ON match_outcomes(match_id, column_name);
CREATE INDEX idx_uploads_composite ON file_uploads(upload_status, file_type, created_at);
CREATE INDEX idx_logs_composite ON system_logs(level, created_at, user_id);
CREATE INDEX idx_tokens_composite ON api_tokens(user_id, is_active, expires_at);

-- Create views for common queries
CREATE OR REPLACE VIEW active_matches AS
SELECT
    m.*,
    COUNT(mo.id) as outcome_count,
    GROUP_CONCAT(CONCAT(mo.column_name, ':', mo.float_value) SEPARATOR ';') as outcomes
@@ -195,6 +246,19 @@ LEFT JOIN match_outcomes mo ON m.id = mo.match_id
WHERE m.active_status = TRUE
GROUP BY m.id;

CREATE OR REPLACE VIEW fixtures_with_active_time AS
SELECT
    m.fixture_id,
    m.fixture_active_time,
    m.filename,
    MIN(m.created_at) as created_at,
    COUNT(m.id) as match_count,
    SUM(CASE WHEN m.active_status = TRUE THEN 1 ELSE 0 END) as active_matches
FROM matches m
WHERE m.fixture_active_time IS NOT NULL
GROUP BY m.fixture_id, m.fixture_active_time, m.filename
ORDER BY m.fixture_active_time DESC;

CREATE OR REPLACE VIEW upload_summary AS
SELECT
    DATE(created_at) as upload_date,
@@ -85,12 +85,13 @@ sudo ./install.sh
```bash
cp .env.example .env
# Edit .env with your configuration
# Note: For PyInstaller deployments, configuration will migrate to mbetterd.conf automatically
```

## Configuration

### Configuration File (mbetterd.conf)

The system automatically migrates from `.env` to `mbetterd.conf` stored in persistent directories for PyInstaller compatibility. Configuration settings include:

```bash
# Database Configuration
@@ -246,6 +247,47 @@
curl -X GET "http://your-server/api/match/123" \
  -H "Authorization: Bearer YOUR_API_TOKEN"
```

#### Get Fixture Updates (New!)

The `/api/updates` endpoint provides incremental synchronization for fixture data:

```bash
# Get last N fixtures (default behavior, N configured in system settings)
curl -X GET "http://your-server/api/updates" \
  -H "Authorization: Bearer YOUR_API_TOKEN"

# Get fixtures updated after a specific unix timestamp
curl -X GET "http://your-server/api/updates?from=1704067200" \
  -H "Authorization: Bearer YOUR_API_TOKEN"

# POST method also supported with JSON body
curl -X POST "http://your-server/api/updates" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"from": 1704067200}'

# Get recent fixtures without timestamp filter
curl -X POST "http://your-server/api/updates" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{}'
```

**Features:**
- **Incremental Updates**: Use `from` parameter for efficient data synchronization
- **Flexible Methods**: Supports both GET (query params) and POST (JSON body)
- **Configurable Limits**: Respects system setting for maximum fixtures returned
- **Authenticated ZIP Downloads**: Secure direct download URLs with token authentication
- **Hybrid Authentication**: Works with both JWT and API tokens automatically
- **Smart Fallback**: Gracefully handles existing data without active timestamps
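A client-side incremental sync can be sketched as a small loop. This is a hypothetical example, not shipped client code: `fetch_updates` wraps the authenticated call to `/api/updates`, and the assumed response shape (a JSON list of fixture objects carrying `fixture_active_time`) is an assumption, not documented API:

```python
import json
import urllib.request
from typing import Optional

def fetch_updates(base_url: str, token: str, since: Optional[int] = None) -> list:
    """Call /api/updates with Bearer auth, optionally passing the last-seen
    unix timestamp as the 'from' query parameter.

    Assumes the endpoint returns a JSON list of fixture dicts that include
    a 'fixture_active_time' field (an assumption for this sketch).
    """
    url = f"{base_url}/api/updates"
    if since is not None:
        url += f"?from={since}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def sync(fetch, last_seen: Optional[int] = None):
    """One incremental sync step: fetch updates and advance the cursor.

    'fetch' is injected (e.g. lambda since: fetch_updates(URL, TOKEN, since))
    so the loop can be tested without a live server.
    """
    fixtures = fetch(last_seen)
    for f in fixtures:
        ts = f.get("fixture_active_time")  # may be absent for legacy data
        if ts is not None:
            last_seen = max(last_seen or 0, ts)
    return fixtures, last_seen
```

On the next call, pass the returned cursor back as `last_seen` so only fixtures activated since then are fetched.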
#### Download ZIP Files (Authenticated)

```bash
# Download ZIP file for specific match (requires authentication)
curl -X GET "http://your-server/api/download/zip/123" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -o "match_123.zip"
```

## File Format Requirements

### Fixture Files (CSV/XLSX)
@@ -458,6 +500,7 @@ curl -X DELETE http://your-server/profile/tokens/123/delete \
- `GET /api/fixtures` - List all fixtures with match counts
- `GET /api/matches` - List matches with pagination and filtering
- `GET /api/match/{id}` - Get match details with outcomes
- `GET|POST /api/updates` - **New!** Get fixture updates with incremental sync support

### Upload Endpoints
- `POST /upload/api/fixture` - Upload fixture file
@@ -510,7 +553,7 @@ curl -X DELETE http://your-server/profile/tokens/123/delete \
## Building Single Executable

The project can be packaged as a single executable file for easy distribution with **cross-platform persistent directories**:

### Quick Build
```bash
@@ -533,9 +576,18 @@ The executable will be created in the `dist/` directory and includes:
- Database utilities and models
- Web dashboard and API
- Configuration templates
- **Cross-platform persistent directory support**

**Executable Size**: ~80-120MB
**No Python Installation Required** on target systems
**Cross-Platform Compatibility**: Windows, macOS, and Linux

### PyInstaller Features
- **Persistent Data Storage**: Files persist between application restarts
- **Cross-Platform Directories**: Uses OS-appropriate locations (AppData, Library, /opt)
- **Configuration Migration**: Automatic .env to mbetterd.conf migration
- **Upload Directory Persistence**: ZIP files and fixtures stored outside temp directories
- **Platform Detection**: Automatic PyInstaller environment detection

See [BUILD.md](BUILD.md) for detailed build instructions and troubleshooting.
@@ -643,11 +695,31 @@ curl -H "Authorization: Bearer $API_TOKEN" \
---
**Version**: 1.2.1
**Last Updated**: 2025-08-21
**Minimum Requirements**: Python 3.8+, MySQL 5.7+, Linux/Windows/macOS

### Recent Updates (v1.2.1) - PyInstaller Enhancement
- **Cross-Platform Persistent Directories**: Windows (%APPDATA%), macOS (~/Library/Application Support), Linux (/opt/MBetter)
- **Configuration Migration**: Automatic .env to mbetterd.conf migration for PyInstaller deployments
- **Authenticated ZIP Downloads**: Secure API endpoint for ZIP file downloads with token authentication
- **PyInstaller Detection**: Automatic detection and optimization for PyInstaller environments
- **Persistent Upload Storage**: Uploads stored outside PyInstaller temp directories
- **Migration Utility**: migrate_config.py script for environment transition
- **Platform-Specific Paths**: OS-appropriate directory structures for all platforms

### Updates (v1.2.0) - API Enhancement
- **New `/api/updates` Endpoint**: Incremental fixture synchronization with timestamp-based filtering
- **Hybrid Authentication**: JWT and API token support with automatic fallback
- **Fixture Active Time Tracking**: Automatic timestamp management for fixture activation
- **SHA1-based ZIP Naming**: Consistent file naming across all upload methods
- **Configurable API Limits**: System setting for controlling API response sizes
- **Data Backfill Utility**: Migration tool for existing fixture data
- **Enhanced Database Schema**: New indexed columns and optimized queries
- **Flexible HTTP Methods**: Both GET and POST support for API endpoints
- **Fallback Mechanisms**: Graceful degradation for legacy data compatibility

### Previous Updates (v1.1.0)
- **API Token Management**: Complete user-generated token system
- **Enhanced Security**: SHA256 token hashing with usage tracking
- **Web Interface**: Professional token management UI