SexHackMe / vidai · Commits · d53965a6

Commit d53965a6, authored Oct 09, 2025 by Stefy Lanza (nextime / spora )

Fix unpacking error in model.generate() calls by handling variable tuple lengths

parent 2a6d39b2

1 changed file with 25 additions and 4 deletions: vidai/worker_analysis.py (+25 -4)
@@ -155,11 +155,18 @@ def analyze_single_image(image_path, prompt, model):
     gen_result = model.generate({"messages": messages}, max_new_tokens=128)
     if isinstance(gen_result, tuple):
-        result, tokens_used = gen_result
+        if len(gen_result) >= 2:
+            result, tokens_used = gen_result[0], gen_result[1]
+        elif len(gen_result) == 1:
+            result = gen_result[0]
+            tokens_used = 0
+        else:
+            result = ""
+            tokens_used = 0
     else:
         result = gen_result
         tokens_used = 0
     return result, tokens_used
     # For now, estimate tokens (could be improved with actual token counting)
     estimated_tokens = len(result.split()) + len(prompt.split())
     return result, estimated_tokens
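The error this commit fixes is the bare two-target unpack: if `generate()` returns a tuple whose length is not exactly 2, `result, tokens_used = gen_result` raises a `ValueError`. A minimal standalone sketch of the before/after behavior (illustrative helper names, not from the commit):

```python
def unpack_naive(gen_result):
    # Pre-commit behavior: assumes generate() always returns a 2-tuple.
    # Raises ValueError for any other tuple length.
    result, tokens_used = gen_result
    return result, tokens_used

def unpack_guarded(gen_result):
    # Post-commit behavior: tolerate tuples of any length, or a bare value.
    if isinstance(gen_result, tuple):
        if len(gen_result) >= 2:
            return gen_result[0], gen_result[1]
        elif len(gen_result) == 1:
            return gen_result[0], 0
        return "", 0
    return gen_result, 0

# A 1-tuple breaks the naive version but not the guarded one.
try:
    unpack_naive(("some text",))
except ValueError:
    pass  # "not enough values to unpack (expected 2, got 1)"
print(unpack_guarded(("some text",)))  # -> ('some text', 0)
```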
@@ -320,7 +327,14 @@ def analyze_media(media_path, prompt, model_path, interval=10, job_id_int=None,
     messages = [{"role": "user", "content": [{"type": "text", "text": summary_prompt}]}]
     gen_result = model.generate({"messages": messages}, max_new_tokens=256)
     if isinstance(gen_result, tuple):
-        summary, summary_tokens = gen_result
+        if len(gen_result) >= 2:
+            summary, summary_tokens = gen_result[0], gen_result[1]
+        elif len(gen_result) == 1:
+            summary = gen_result[0]
+            summary_tokens = 0
+        else:
+            summary = ""
+            summary_tokens = 0
     else:
         summary = gen_result
         summary_tokens = 0
@@ -328,7 +342,14 @@ def analyze_media(media_path, prompt, model_path, interval=10, job_id_int=None,
     # Use text-only model for summary
     gen_result = model.generate(f"Summarize the video based on frame descriptions: {' '.join(descriptions)}", max_new_tokens=256)
     if isinstance(gen_result, tuple):
-        summary, summary_tokens = gen_result
+        if len(gen_result) >= 2:
+            summary, summary_tokens = gen_result[0], gen_result[1]
+        elif len(gen_result) == 1:
+            summary = gen_result[0]
+            summary_tokens = 0
+        else:
+            summary = ""
+            summary_tokens = 0
     else:
         summary = gen_result
         summary_tokens = 0