Update SKILL.md with new CLI options and MCP documentation

parent c2ef12d3
@@ -96,6 +96,9 @@ python3 videogen --model-list --high-vram # >30GB VRAM
python3 videogen --model-list --huge-vram # >55GB VRAM
python3 videogen --model-list --nsfw-friendly
# Batch output (for scripts)
python3 videogen --model-list --model-list-batch
# Disable a model from auto selection
python3 videogen --disable-model <ID_or_name>
@@ -132,6 +135,16 @@ python3 videogen --model wan_14b_t2v --prompt "..." --vram_limit 16
python3 videogen --model wan_14b_t2v --prompt "..." --low_ram_mode
```
#### Output Options
```bash
# Specify output directory for batch processing
python3 videogen --model wan_14b_t2v --prompt "..." --output-dir /path/to/output
# Auto-confirm prompts (useful for scripts)
python3 videogen --model wan_14b_t2v --prompt "..." --yes
```
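For scripted batch runs it can help to assemble the command line programmatically before handing it to `subprocess`. A minimal sketch, using only the flags documented above (`--model`, `--prompt`, `--output-dir`, `--yes`); the helper function itself is illustrative, not part of videogen:

```python
def build_videogen_cmd(model, prompt, output_dir, auto_confirm=True):
    """Assemble a videogen invocation for unattended batch scripts (illustrative)."""
    cmd = ["python3", "videogen",
           "--model", model,
           "--prompt", prompt,
           "--output-dir", output_dir]
    if auto_confirm:
        cmd.append("--yes")  # skip interactive confirmation prompts
    return cmd

cmd = build_videogen_cmd("wan_14b_t2v", "a red fox in the snow", "/tmp/videogen-out")
# pass to subprocess.run(cmd) in a real script
```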
### Auto Mode
Auto mode is the easiest way for AI agents to generate content:
@@ -329,6 +342,11 @@ python3 videogen --video input.mp4 --transcribe --whisper-model large
# Specify source language
python3 videogen --video input.mp4 --transcribe --source-lang en
# Audio chunking strategies for long videos
python3 videogen --video input.mp4 --transcribe --audio-chunk overlap # 60s chunks with 2s overlap
python3 videogen --video input.mp4 --transcribe --audio-chunk word-boundary # Split at word boundaries
python3 videogen --video input.mp4 --transcribe --audio-chunk vad # Skip silence with VAD
```
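The `overlap` strategy can be pictured as a sliding window: fixed-size chunks whose starts advance by chunk length minus overlap, so neighbouring chunks share a small region and no words are lost at the seams. A sketch of the boundary arithmetic for the documented defaults (60s chunks, 2s overlap); the exact internal implementation may differ:

```python
def overlap_chunks(duration, chunk=60.0, overlap=2.0):
    """Return (start, end) times covering `duration` seconds with overlapping chunks."""
    step = chunk - overlap  # each new chunk starts 58s after the previous one
    chunks, start = [], 0.0
    while start < duration:
        chunks.append((start, min(start + chunk, duration)))
        start += step
    return chunks

overlap_chunks(130.0)
# [(0.0, 60.0), (58.0, 118.0), (116.0, 130.0)]
```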
### Create Subtitles
@@ -630,6 +648,52 @@ print(f"Model: {data['model']}")
---
## MCP Server Integration
VideoGen provides an MCP (Model Context Protocol) server for programmatic access:
```bash
# Start the MCP server
python3 videogen_mcp_server.py
```
### MCP Tools Available
The MCP server exposes the following tools:
| Tool | Description |
|------|-------------|
| `videogen_list_models` | List available models (supports filter and batch parameters) |
| `videogen_show_model` | Show model details by ID or name |
| `videogen_generate_video` | Generate video from text (T2V) |
| `videogen_generate_image` | Generate image from text (T2I) |
| `videogen_animate_image` | Animate image (I2V) |
| `videogen_transform_image` | Transform image (I2I) |
| `videogen_generate_with_audio` | Generate video with audio |
| `videogen_transcribe_video` | Transcribe video audio |
| `videogen_create_subtitles` | Create subtitles |
| `videogen_dub_video` | Dub/translate video |
| `videogen_search_models` | Search HuggingFace for models |
| `videogen_add_model` | Add custom model |
| `videogen_update_models` | Update model database |
### MCP Parameters
New MCP parameters for the tools:
```json
// List models with batch output
{"filter": "t2v", "batch": true}
// Generate with output directory
{"model": "wan_14b_t2v", "prompt": "...", "output_dir": "/path/to/output", "yes": true}
// Transcribe with audio chunking
{"video": "input.mp4", "audio_chunk": "overlap"}  // or "word-boundary" or "vad"
```
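Since malformed payloads only fail once they reach the server, an agent may want to sanity-check parameters client-side first. A minimal sketch, assuming only the parameter names and `audio_chunk` values documented above; the validator itself is hypothetical, not part of the MCP server:

```python
ALLOWED_AUDIO_CHUNK = {"overlap", "word-boundary", "vad"}

def validate_transcribe_params(params):
    """Check a videogen_transcribe_video payload before sending it (illustrative)."""
    if "video" not in params:
        raise ValueError("missing required 'video' parameter")
    chunk = params.get("audio_chunk")
    if chunk is not None and chunk not in ALLOWED_AUDIO_CHUNK:
        raise ValueError(f"unknown audio_chunk strategy: {chunk!r}")
    return params

validate_transcribe_params({"video": "input.mp4", "audio_chunk": "vad"})
```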
---
## Best Practices for AI Agents
1. **Update models first** - Run `--update-models` before first use