    Centralize all model loading/downloading logic in codai.models.cache · c93d4a6b
    Your Name authored
    - Added a unified load_model() function as the single entry point for model loading
    - Updated WhisperServerManager to call the centralized load_model() instead of duplicating loading logic inline
    - Removed proxy methods from MultiModelManager; callers now use the cache module directly
    - All cache functions now work with both the GGUF and HF model caches
    - Improved separation of concerns: the cache module handles all caching and downloading
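
    The commit does not show the module itself, so the following is only an illustrative sketch of what a centralized cache module with a single load_model() entry point could look like. Everything beyond the names load_model and codai.models.cache (the cache paths, the GGUF/HF dispatch rule, the helper functions) is a hypothetical assumption, not the actual implementation:

    ```python
    # Hypothetical sketch of a centralized model-cache module. Cache locations,
    # helper names, and the extension-based GGUF/HF dispatch are assumptions.
    from pathlib import Path

    GGUF_CACHE = Path.home() / ".cache" / "codai" / "gguf"  # assumed location
    HF_CACHE = Path.home() / ".cache" / "codai" / "hf"      # assumed location

    def _is_gguf(model_id: str) -> bool:
        # Assumed rule: GGUF models are identified by their file extension.
        return model_id.endswith(".gguf")

    def _cached_path(model_id: str) -> Path:
        # Map a model identifier onto the matching cache directory,
        # flattening "org/name" IDs into a single path component.
        cache = GGUF_CACHE if _is_gguf(model_id) else HF_CACHE
        return cache / model_id.replace("/", "--")

    def load_model(model_id: str) -> Path:
        """Single entry point: return a local path, fetching on a cache miss."""
        path = _cached_path(model_id)
        if not path.exists():
            # Real code would download the model here (e.g. via huggingface_hub
            # or an HTTP client); this sketch only creates a placeholder file.
            path.parent.mkdir(parents=True, exist_ok=True)
            path.touch()
        return path
    ```

    With this shape, both WhisperServerManager and MultiModelManager can call load_model() and let the cache module decide which backing cache applies, which matches the separation of concerns the commit describes.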
manager.py 36 KB