nexlab / coderai · Commits · acf62437

Commit acf62437 authored Mar 14, 2026 by Your Name

Make --loadswap preload models like --loadall for Vulkan backend

parent 07f7a4d3

1 changed file with 27 additions and 11 deletions (+27 −11)

coderai
```diff
@@ -4702,9 +4702,10 @@ def main():
     # Pre-load models based on mode
     print(f"DEBUG: load_mode at line 4710 = '{load_mode}'")
-    if load_mode == "loadall":
-        # Load all models into VRAM up to full capacity, then offload to CPU RAM
-        print("\n=== Load All Mode ===")
+    if load_mode in ("loadall", "loadswap"):
+        # Load all models into VRAM (or RAM for CUDA loadswap)
+        mode_name = "Load All" if load_mode == "loadall" else "Load Swap"
+        print(f"\n=== {mode_name} Mode ===")
         # Load main text model first
         if model_names:
             ...
```
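The consolidated mode-banner logic from this hunk can be exercised on its own. A minimal sketch, assuming the same two mode strings as the diff; the function name `banner_for_mode` is hypothetical and not from the repository:

```python
def banner_for_mode(load_mode: str) -> str:
    """Mirror the commit's change: both preload modes share one code path,
    differing only in the banner text that is printed."""
    if load_mode in ("loadall", "loadswap"):
        mode_name = "Load All" if load_mode == "loadall" else "Load Swap"
        return f"\n=== {mode_name} Mode ==="
    return ""  # any other mode: no preload banner

print(banner_for_mode("loadall"))   # prints === Load All Mode ===
print(banner_for_mode("loadswap"))  # prints === Load Swap Mode ===
```

This is the pattern the diff applies: replacing two near-duplicate `if` branches with a single membership test plus a conditional name.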
```python
@@ -4973,7 +4974,22 @@ def main():
    elif load_mode == "loadswap":
        # Load models in order: model > image > audio > TTS, keep active in VRAM
        # For Vulkan backend, load all models to VRAM like loadall (VRAM is not limited like CUDA)
        print("\n=== Load Swap Mode ===")
        # For Vulkan, use same preloading as loadall
        if args.backend == "vulkan":
            # Vulkan: Load all models to GPU like loadall
            if model_names:
                print(f"Pre-loading main text model: {model_names[0]}")
            if image_models:
                print(f"Pre-loading image model: {image_models[0]}")
            if audio_models:
                print(f"Pre-loading audio model: {audio_models[0]}")
            if args.tts_model:
                print(f"Pre-loading TTS model: {args.tts_model}")
        else:
            # NVIDIA/CUDA: First model in VRAM, others in RAM
            if model_names:
                print(f"Main text model will be in VRAM: {model_names[0]}")
            if image_models:
                ...
```
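The backend branch this commit introduces can be summarized as a placement decision: on Vulkan every model is preloaded to the GPU (as in `--loadall`), while on CUDA only the first text model is pinned in VRAM and the rest sit in CPU RAM to be swapped in on demand. A hypothetical standalone sketch; `plan_placement` and the returned `"vram"`/`"ram"` labels are illustrative and not from the repository:

```python
def plan_placement(backend, model_names, image_models=(), audio_models=(), tts_model=None):
    """Decide where each model should live, mirroring the commit's
    Vulkan-vs-CUDA split for --loadswap."""
    plan = {}
    if backend == "vulkan":
        # Vulkan: preload everything to GPU, same as --loadall
        for name in list(model_names) + list(image_models) + list(audio_models):
            plan[name] = "vram"
        if tts_model:
            plan[tts_model] = "vram"
    else:
        # NVIDIA/CUDA: first text model in VRAM, everything else in CPU RAM
        for i, name in enumerate(model_names):
            plan[name] = "vram" if i == 0 else "ram"
        for name in list(image_models) + list(audio_models):
            plan[name] = "ram"
        if tts_model:
            plan[tts_model] = "ram"
    return plan

# Vulkan preloads all models; CUDA pins only the first text model
print(plan_placement("vulkan", ["llama"], image_models=["sdxl"]))
print(plan_placement("cuda", ["llama", "mistral"], image_models=["sdxl"]))
```

Separating the decision from the loading side effects like this would make the branch unit-testable, but the commit itself keeps both inline in `main()`.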