Centralize all model loading/downloading logic in codai.models.cache
- Added unified load_model() function as the main entry point for model loading
- Updated WhisperServerManager to use the centralized load_model() instead of inline logic
- Removed proxy methods from MultiModelManager - use the cache module directly
- All cache functions now work seamlessly with both GGUF and HF model caches
- Improved separation of concerns: the cache module handles all caching/downloading
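As a rough illustration of the unified entry point described above, the sketch below shows one way a single `load_model()` could dispatch between a GGUF cache and an HF cache. All names, paths, and the dispatch rule (file extension vs. repo id) are assumptions for illustration; the actual signatures in `codai.models.cache` may differ.

```python
from pathlib import Path

# Module-level registry of already-loaded models (hypothetical detail).
_LOADED: dict[str, dict] = {}

def load_model(model_id: str, cache_dir: str = "~/.cache/codai") -> dict:
    """Resolve, cache, and load a model from either backing store.

    Hypothetical dispatch rule: identifiers ending in ".gguf" go to the
    GGUF cache; anything else is treated as a Hugging Face repo id.
    """
    if model_id in _LOADED:
        return _LOADED[model_id]  # reuse the already-loaded instance
    root = Path(cache_dir).expanduser()
    if model_id.endswith(".gguf"):
        model = {"kind": "gguf", "path": str(root / "gguf" / model_id)}
    else:
        model = {"kind": "hf", "path": str(root / "hf" / model_id)}
    _LOADED[model_id] = model
    return model
```

With a single entry point like this, callers such as a server manager never need their own download/resolve logic; they just ask the cache module for a model by id.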