danielhanchen committed
Commit f5b2c61 · verified · 1 Parent(s): 6f74086

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,286 @@
1
+ ---
2
+ tags:
3
+ - unsloth
4
+ base_model:
5
+ - Qwen/Qwen3-VL-235B-A22B-Thinking-FP8
6
+ license: apache-2.0
7
+ pipeline_tag: image-text-to-text
8
+ ---
9
+ > [!NOTE]
10
+ > Includes Unsloth **chat template fixes**! <br> For `llama.cpp`, use `--jinja`
11
+ >
12
+
13
+ <div>
14
+ <p style="margin-top: 0;margin-bottom: 0;">
15
+ <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
16
+ </p>
17
+ <div style="display: flex; gap: 5px; align-items: center; ">
18
+ <a href="https://github.com/unslothai/unsloth/">
19
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
20
+ </a>
21
+ <a href="https://discord.gg/unsloth">
22
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
23
+ </a>
24
+ <a href="https://docs.unsloth.ai/">
25
+ <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
26
+ </a>
27
+ </div>
28
+ </div>
29
+
30
+ <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
31
+ <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
32
+ </a>
33
+
34
+ # Qwen3-VL-235B-A22B-Thinking-FP8
35
+
36
+ > This repository contains an FP8-quantized version of the [Qwen3-VL-235B-A22B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Thinking) model. The quantization method is fine-grained FP8 quantization with a block size of 128, and its performance metrics are nearly identical to those of the original BF16 model. Enjoy!
37
+
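+ For intuition, below is a minimal, hedged sketch of what fine-grained block-wise FP8 (e4m3) weight quantization with a 128×128 block size can look like. The function names and the use of `torch.float8_e4m3fn` are assumptions made for this illustration, not the exact recipe used to produce these weights.
+
+ ```python
+ # Toy block-wise FP8 quantization sketch (assumes dims divisible by the block size).
+ import torch
+
+ FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448 for e4m3
+
+ def quantize_blockwise(w: torch.Tensor, block: int = 128):
+     """Return an FP8 tensor plus one scale per 128x128 tile of the weight."""
+     r, c = w.shape
+     tiles = w.reshape(r // block, block, c // block, block)
+     scale = tiles.abs().amax(dim=(1, 3), keepdim=True).clamp(min=1e-12) / FP8_MAX
+     q = (tiles / scale).to(torch.float8_e4m3fn).reshape(r, c)
+     return q, scale[:, 0, :, 0]
+
+ def dequantize_blockwise(q: torch.Tensor, scale: torch.Tensor, block: int = 128):
+     """Approximate the original weight from FP8 tiles and per-tile scales."""
+     r, c = q.shape
+     tiles = q.to(torch.bfloat16).reshape(r // block, block, c // block, block)
+     return (tiles * scale[:, None, :, None].to(torch.bfloat16)).reshape(r, c)
+
+ w = torch.randn(256, 384, dtype=torch.bfloat16)
+ q, s = quantize_blockwise(w)
+ print((w - dequantize_blockwise(q, s)).abs().max())  # small reconstruction error
+ ```
+
+ Layers listed under `ignored_layers` in this repo's `config.json` (vision blocks, mergers, MoE router gates, `lm_head`) are kept in higher precision rather than quantized this way.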
38
+
39
+ Meet Qwen3-VL — the most powerful vision-language model in the Qwen series to date.
40
+
41
+ This generation delivers comprehensive upgrades across the board: superior text understanding & generation, deeper visual perception & reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities.
42
+
43
+ Available in Dense and MoE architectures that scale from edge to cloud, with Instruct and reasoning‑enhanced Thinking editions for flexible, on‑demand deployment.
44
+
45
+
46
+ #### Key Enhancements:
47
+
48
+ * **Visual Agent**: Operates PC/mobile GUIs—recognizes elements, understands functions, invokes tools, completes tasks.
49
+
50
+ * **Visual Coding Boost**: Generates Draw.io/HTML/CSS/JS from images/videos.
51
+
52
+ * **Advanced Spatial Perception**: Judges object positions, viewpoints, and occlusions; provides stronger 2D grounding and enables 3D grounding for spatial reasoning and embodied AI.
53
+
54
+ * **Long Context & Video Understanding**: Native 256K context, expandable to 1M; handles books and hours-long video with full recall and second-level indexing.
55
+
56
+ * **Enhanced Multimodal Reasoning**: Excels in STEM/Math—causal analysis and logical, evidence-based answers.
57
+
58
+ * **Upgraded Visual Recognition**: Broader, higher-quality pretraining enables the model to “recognize everything”—celebrities, anime, products, landmarks, flora/fauna, etc.
59
+
60
+ * **Expanded OCR**: Supports 32 languages (up from 19); robust in low light, blur, and tilt; better with rare/ancient characters and jargon; improved long-document structure parsing.
61
+
62
+ * **Text Understanding on par with pure LLMs**: Seamless text–vision fusion for lossless, unified comprehension.
63
+
64
+
65
+ #### Model Architecture Updates:
66
+
67
+ <p align="center">
68
+ <img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl_arc.jpg" width="80%"/>
69
+ </p>
70
+
71
+
72
+ 1. **Interleaved-MRoPE**: Full‑frequency allocation over time, width, and height via robust positional embeddings, enhancing long‑horizon video reasoning.
73
+
74
+ 2. **DeepStack**: Fuses multi‑level ViT features to capture fine‑grained details and sharpen image–text alignment (a toy sketch of the idea follows this list).
75
+
76
+ 3. **Text–Timestamp Alignment:** Moves beyond T‑RoPE to precise, timestamp‑grounded event localization for stronger video temporal modeling.
77
+
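+ As a rough illustration of the DeepStack idea (not Qwen3-VL's actual implementation), the sketch below taps hidden states from a few intermediate ViT blocks, projects each level with a small merger, and adds the result onto the language model's hidden states at the vision-token positions. All module names, shapes, and tap indices are assumptions made up for the example.
+
+ ```python
+ # Toy DeepStack-style multi-level feature fusion (illustrative only).
+ import torch
+ import torch.nn as nn
+
+ class TinyViT(nn.Module):
+     """Stand-in ViT that also returns features from chosen intermediate blocks."""
+     def __init__(self, dim=64, depth=8, tap_layers=(2, 4, 6)):
+         super().__init__()
+         self.blocks = nn.ModuleList(
+             nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True) for _ in range(depth)
+         )
+         self.tap_layers = set(tap_layers)
+
+     def forward(self, x):
+         taps = []
+         for i, blk in enumerate(self.blocks):
+             x = blk(x)
+             if i in self.tap_layers:
+                 taps.append(x)          # multi-level visual features
+         return x, taps
+
+ class DeepStackFusion(nn.Module):
+     """Project each tapped level and add it to the LLM hidden states of vision tokens."""
+     def __init__(self, vit_dim=64, llm_dim=128, num_levels=3):
+         super().__init__()
+         self.mergers = nn.ModuleList(nn.Linear(vit_dim, llm_dim) for _ in range(num_levels))
+
+     def forward(self, llm_hidden, vision_positions, taps):
+         # In the real design each level would typically be injected at a different
+         # decoder layer; here they are simply accumulated for brevity.
+         for merger, feat in zip(self.mergers, taps):
+             llm_hidden[:, vision_positions] = llm_hidden[:, vision_positions] + merger(feat)
+         return llm_hidden
+
+ vit, fuse = TinyViT(), DeepStackFusion()
+ with torch.no_grad():
+     patches = torch.randn(1, 16, 64)              # 16 visual patch tokens
+     _, taps = vit(patches)
+     hidden = torch.randn(1, 40, 128)              # LLM hidden states for 40 tokens
+     hidden = fuse(hidden, slice(5, 21), taps)     # vision tokens at positions 5..20
+ print(hidden.shape)                               # torch.Size([1, 40, 128])
+ ```
+
+ In the released model, the tapped ViT layers correspond to `deepstack_visual_indexes` `[8, 16, 24]` in this repo's `config.json`.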
78
+
79
+ This is the weight repository for the FP8 version of Qwen3-VL-235B-A22B-Thinking.
80
+
81
+
82
+ ---
83
+
84
+ ## Model Performance
85
+
86
+ **Multimodal performance**
87
+
88
+ ![](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/table_thinking_vl.jpg)
89
+
90
+ **Pure text performance**
91
+ ![](https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/table_thinking_text.jpg)
92
+
93
+ ## Quickstart
94
+
95
+ Currently, 🤗 Transformers does not support loading these weights directly. Stay tuned!
96
+
97
+ We recommend deploying the model with vLLM or SGLang; example inference code is provided below. For details on the runtime environment and deployment, please refer to this [link](https://github.com/QwenLM/Qwen3-VL?tab=readme-ov-file#deployment).
98
+
99
+ ### vLLM Inference
100
+
101
+ Here we provide a code snippet demonstrating how to use vLLM to run inference with Qwen3-VL locally. For more details on efficient deployment with vLLM, please refer to the [community deployment guide](https://docs.vllm.ai/projects/recipes/en/latest/Qwen/Qwen3-VL.html).
102
+
103
+ ```python
104
+ # -*- coding: utf-8 -*-
105
+ import torch
106
+ from qwen_vl_utils import process_vision_info
107
+ from transformers import AutoProcessor
108
+ from vllm import LLM, SamplingParams
109
+
110
+ import os
111
+ os.environ['VLLM_WORKER_MULTIPROC_METHOD'] = 'spawn'
112
+
113
+ def prepare_inputs_for_vllm(messages, processor):
114
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
115
+     # qwen_vl_utils 0.0.14+ required
116
+ image_inputs, video_inputs, video_kwargs = process_vision_info(
117
+ messages,
118
+ image_patch_size=processor.image_processor.patch_size,
119
+ return_video_kwargs=True,
120
+ return_video_metadata=True
121
+ )
122
+ print(f"video_kwargs: {video_kwargs}")
123
+
124
+ mm_data = {}
125
+ if image_inputs is not None:
126
+ mm_data['image'] = image_inputs
127
+ if video_inputs is not None:
128
+ mm_data['video'] = video_inputs
129
+
130
+ return {
131
+ 'prompt': text,
132
+ 'multi_modal_data': mm_data,
133
+ 'mm_processor_kwargs': video_kwargs
134
+ }
135
+
136
+
137
+ if __name__ == '__main__':
138
+ # messages = [
139
+ # {
140
+ # "role": "user",
141
+ # "content": [
142
+ # {
143
+ # "type": "video",
144
+ # "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/space_woaudio.mp4",
145
+ # },
146
+ # {"type": "text", "text": "这段视频有多长"},
147
+ # ],
148
+ # }
149
+ # ]
150
+
151
+ messages = [
152
+ {
153
+ "role": "user",
154
+ "content": [
155
+ {
156
+ "type": "image",
157
+ "image": "https://ofasys-multimodal-wlcb-3-toshanghai.oss-accelerate.aliyuncs.com/wpf272043/keepme/image/receipt.png",
158
+ },
159
+ {"type": "text", "text": "Read all the text in the image."},
160
+ ],
161
+ }
162
+ ]
163
+
164
+ # TODO: change to your own checkpoint path
165
+ checkpoint_path = "Qwen/Qwen3-VL-235B-A22B-Thinking-FP8"
166
+ processor = AutoProcessor.from_pretrained(checkpoint_path)
167
+ inputs = [prepare_inputs_for_vllm(message, processor) for message in [messages]]
168
+
169
+ llm = LLM(
170
+ model=checkpoint_path,
171
+ trust_remote_code=True,
172
+ gpu_memory_utilization=0.70,
173
+ enforce_eager=False,
174
+ tensor_parallel_size=torch.cuda.device_count(),
175
+ seed=0
176
+ )
177
+
178
+ sampling_params = SamplingParams(
179
+ temperature=0,
180
+ max_tokens=1024,
181
+ top_k=-1,
182
+ stop_token_ids=[],
183
+ )
184
+
185
+ for i, input_ in enumerate(inputs):
186
+ print()
187
+ print('=' * 40)
188
+ print(f"Inputs[{i}]: {input_['prompt']=!r}")
189
+ print('\n' + '>' * 40)
190
+
191
+ outputs = llm.generate(inputs, sampling_params=sampling_params)
192
+ for i, output in enumerate(outputs):
193
+ generated_text = output.outputs[0].text
194
+ print()
195
+ print('=' * 40)
196
+ print(f"Generated text: {generated_text!r}")
197
+ ```
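+
+ The snippet above decodes greedily (`temperature=0`) for reproducibility. If you instead want sampling behaviour matching the defaults shipped in this repo's `generation_config.json` (temperature 0.8, top_p 0.95, top_k 20), something along these lines should work; the `max_tokens` budget is an assumption, since thinking models usually need room for the reasoning trace.
+
+ ```python
+ # Sampling settings mirroring generation_config.json in this repository (sketch).
+ from vllm import SamplingParams
+
+ sampling_params = SamplingParams(
+     temperature=0.8,         # "temperature": 0.8
+     top_p=0.95,              # "top_p": 0.95
+     top_k=20,                # "top_k": 20
+     repetition_penalty=1.0,  # "repetition_penalty": 1.0
+     max_tokens=4096,         # assumed budget for <think> + answer
+ )
+ ```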
198
+
199
+ ### SGLang Inference
200
+
201
+ Here we provide a code snippet demonstrating how to use SGLang to run inference with Qwen3-VL locally.
202
+
203
+ ```python
204
+ import time
+ import torch  # used below for torch.cuda.device_count()
205
+ from PIL import Image
206
+ from sglang import Engine
207
+ from qwen_vl_utils import process_vision_info
208
+ from transformers import AutoProcessor, AutoConfig
209
+
210
+ if __name__ == "__main__":
211
+ # TODO: change to your own checkpoint path
212
+ checkpoint_path = "Qwen/Qwen3-VL-235B-A22B-Thinking-FP8"
213
+ processor = AutoProcessor.from_pretrained(checkpoint_path)
214
+
215
+ messages = [
216
+ {
217
+ "role": "user",
218
+ "content": [
219
+ {
220
+ "type": "image",
221
+ "image": "https://ofasys-multimodal-wlcb-3-toshanghai.oss-accelerate.aliyuncs.com/wpf272043/keepme/image/receipt.png",
222
+ },
223
+ {"type": "text", "text": "Read all the text in the image."},
224
+ ],
225
+ }
226
+ ]
227
+
228
+ text = processor.apply_chat_template(
229
+ messages,
230
+ tokenize=False,
231
+ add_generation_prompt=True
232
+ )
233
+
234
+ image_inputs, _ = process_vision_info(messages, image_patch_size=processor.image_processor.patch_size)
235
+
236
+ llm = Engine(
237
+ model_path=checkpoint_path,
238
+ enable_multimodal=True,
239
+ mem_fraction_static=0.8,
240
+ tp_size=torch.cuda.device_count(),
241
+ attention_backend="fa3"
242
+ )
243
+
244
+ start = time.time()
245
+ sampling_params = {"max_new_tokens": 1024}
246
+ response = llm.generate(prompt=text, image_data=image_inputs, sampling_params=sampling_params)
247
+ print(f"Response costs: {time.time() - start:.2f}s")
248
+ print(f"Generated text: {response['text']}")
249
+ ```
250
+
251
+ ## Citation
252
+
253
+ If you find our work helpful, feel free to cite it.
254
+
255
+ ```
256
+ @misc{qwen3technicalreport,
257
+ title={Qwen3 Technical Report},
258
+ author={Qwen Team},
259
+ year={2025},
260
+ eprint={2505.09388},
261
+ archivePrefix={arXiv},
262
+ primaryClass={cs.CL},
263
+ url={https://arxiv.org/abs/2505.09388},
264
+ }
265
+
266
+ @article{Qwen2.5-VL,
267
+ title={Qwen2.5-VL Technical Report},
268
+ author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},
269
+ journal={arXiv preprint arXiv:2502.13923},
270
+ year={2025}
271
+ }
272
+
273
+ @article{Qwen2VL,
274
+ title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
275
+ author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
276
+ journal={arXiv preprint arXiv:2409.12191},
277
+ year={2024}
278
+ }
279
+
280
+ @article{Qwen-VL,
281
+ title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
282
+ author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
283
+ journal={arXiv preprint arXiv:2308.12966},
284
+ year={2023}
285
+ }
286
+ ```
added_tokens.json ADDED
@@ -0,0 +1,28 @@
1
+ {
2
+ "</think>": 151668,
3
+ "</tool_call>": 151658,
4
+ "</tool_response>": 151666,
5
+ "<think>": 151667,
6
+ "<tool_call>": 151657,
7
+ "<tool_response>": 151665,
8
+ "<|box_end|>": 151649,
9
+ "<|box_start|>": 151648,
10
+ "<|endoftext|>": 151643,
11
+ "<|file_sep|>": 151664,
12
+ "<|fim_middle|>": 151660,
13
+ "<|fim_pad|>": 151662,
14
+ "<|fim_prefix|>": 151659,
15
+ "<|fim_suffix|>": 151661,
16
+ "<|im_end|>": 151645,
17
+ "<|im_start|>": 151644,
18
+ "<|image_pad|>": 151655,
19
+ "<|object_ref_end|>": 151647,
20
+ "<|object_ref_start|>": 151646,
21
+ "<|quad_end|>": 151651,
22
+ "<|quad_start|>": 151650,
23
+ "<|repo_name|>": 151663,
24
+ "<|video_pad|>": 151656,
25
+ "<|vision_end|>": 151653,
26
+ "<|vision_pad|>": 151654,
27
+ "<|vision_start|>": 151652
28
+ }
chat_template.jinja ADDED
@@ -0,0 +1,110 @@
1
+ {%- set image_count = namespace(value=0) %}
2
+ {%- set video_count = namespace(value=0) %}
3
+ {%- macro render_content(content, do_vision_count) %}
4
+ {%- if content is string %}
5
+ {{- content }}
6
+ {%- else %}
7
+ {%- for item in content %}
8
+ {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}
9
+ {%- if do_vision_count %}
10
+ {%- set image_count.value = image_count.value + 1 %}
11
+ {%- endif %}
12
+ {%- if add_vision_id %}Picture {{ image_count.value }}: {% endif -%}
13
+ <|vision_start|><|image_pad|><|vision_end|>
14
+ {%- elif 'video' in item or item.type == 'video' %}
15
+ {%- if do_vision_count %}
16
+ {%- set video_count.value = video_count.value + 1 %}
17
+ {%- endif %}
18
+ {%- if add_vision_id %}Video {{ video_count.value }}: {% endif -%}
19
+ <|vision_start|><|video_pad|><|vision_end|>
20
+ {%- elif 'text' in item %}
21
+ {{- item.text }}
22
+ {%- endif %}
23
+ {%- endfor %}
24
+ {%- endif %}
25
+ {%- endmacro %}
26
+ {%- if tools %}
27
+ {{- '<|im_start|>system\n' }}
28
+ {%- if messages[0].role == 'system' %}
29
+ {{- render_content(messages[0].content, false) + '\n\n' }}
30
+ {%- endif %}
31
+ {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
32
+ {%- for tool in tools %}
33
+ {{- "\n" }}
34
+ {{- tool | tojson }}
35
+ {%- endfor %}
36
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
37
+ {%- else %}
38
+ {%- if messages[0].role == 'system' %}
39
+ {{- '<|im_start|>system\n' + render_content(messages[0].content, false) + '<|im_end|>\n' }}
40
+ {%- endif %}
41
+ {%- endif %}
42
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
43
+ {%- for message in messages[::-1] %}
44
+ {%- set index = (messages|length - 1) - loop.index0 %}
45
+ {%- if ns.multi_step_tool and message.role == "user" %}
46
+ {%- set content = render_content(message.content, false) %}
47
+ {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}
48
+ {%- set ns.multi_step_tool = false %}
49
+ {%- set ns.last_query_index = index %}
50
+ {%- endif %}
51
+ {%- endif %}
52
+ {%- endfor %}
53
+ {%- for message in messages %}
54
+ {%- set content = render_content(message.content, True) %}
55
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
56
+ {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
57
+ {%- elif message.role == "assistant" %}
58
+ {%- set reasoning_content = '' %}
59
+ {%- if message.reasoning_content is string %}
60
+ {%- set reasoning_content = message.reasoning_content %}
61
+ {%- else %}
62
+ {%- if '</think>' in content %}
63
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
64
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
65
+ {%- endif %}
66
+ {%- endif %}
67
+ {%- if loop.index0 > ns.last_query_index %}
68
+ {%- if loop.last or (not loop.last and reasoning_content) %}
69
+ {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
70
+ {%- else %}
71
+ {{- '<|im_start|>' + message.role + '\n' + content }}
72
+ {%- endif %}
73
+ {%- else %}
74
+ {{- '<|im_start|>' + message.role + '\n' + content }}
75
+ {%- endif %}
76
+ {%- if message.tool_calls %}
77
+ {%- for tool_call in message.tool_calls %}
78
+ {%- if (loop.first and content) or (not loop.first) %}
79
+ {{- '\n' }}
80
+ {%- endif %}
81
+ {%- if tool_call.function %}
82
+ {%- set tool_call = tool_call.function %}
83
+ {%- endif %}
84
+ {{- '<tool_call>\n{"name": "' }}
85
+ {{- tool_call.name }}
86
+ {{- '", "arguments": ' }}
87
+ {%- if tool_call.arguments is string %}
88
+ {{- tool_call.arguments }}
89
+ {%- else %}
90
+ {{- tool_call.arguments | tojson }}
91
+ {%- endif %}
92
+ {{- '}\n</tool_call>' }}
93
+ {%- endfor %}
94
+ {%- endif %}
95
+ {{- '<|im_end|>\n' }}
96
+ {%- elif message.role == "tool" %}
97
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
98
+ {{- '<|im_start|>user' }}
99
+ {%- endif %}
100
+ {{- '\n<tool_response>\n' }}
101
+ {{- content }}
102
+ {{- '\n</tool_response>' }}
103
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
104
+ {{- '<|im_end|>\n' }}
105
+ {%- endif %}
106
+ {%- endif %}
107
+ {%- endfor %}
108
+ {%- if add_generation_prompt %}
109
+ {{- '<|im_start|>assistant\n<think>\n' }}
110
+ {%- endif %}
chat_template.json ADDED
@@ -0,0 +1,3 @@
1
+ {
2
+ "chat_template": "{%- set image_count = namespace(value=0) %}\n{%- set video_count = namespace(value=0) %}\n{%- macro render_content(content, do_vision_count) %}\n {%- if content is string %}\n {{- content }}\n {%- else %}\n {%- for item in content %}\n {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}\n {%- if do_vision_count %}\n {%- set image_count.value = image_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}Picture {{ image_count.value }}: {% endif -%}\n <|vision_start|><|image_pad|><|vision_end|>\n {%- elif 'video' in item or item.type == 'video' %}\n {%- if do_vision_count %}\n {%- set video_count.value = video_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}Video {{ video_count.value }}: {% endif -%}\n <|vision_start|><|video_pad|><|vision_end|>\n {%- elif 'text' in item %}\n {{- item.text }}\n {%- endif %}\n {%- endfor %}\n {%- endif %}\n{%- endmacro %}\n{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- render_content(messages[0].content, false) + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + render_content(messages[0].content, false) + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" %}\n {%- set content = render_content(message.content, false) %}\n {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- set content = render_content(message.content, True) %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and 
content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n<think>\\n' }}\n{%- endif %}\n"
3
+ }
config.json ADDED
@@ -0,0 +1,421 @@
1
+ {
2
+ "architectures": [
3
+ "Qwen3VLMoeForConditionalGeneration"
4
+ ],
5
+ "image_token_id": 151655,
6
+ "model_type": "qwen3_vl_moe",
7
+ "pad_token_id": 151654,
8
+ "quantization_config": {
9
+ "activation_scheme": "dynamic",
10
+ "fmt": "e4m3",
11
+ "ignored_layers": [
12
+ "lm_head",
13
+ "model.visual.merger.linear_fc1",
14
+ "model.visual.merger.linear_fc2",
15
+ "model.visual.merger.norm",
16
+ "model.visual.patch_embed.proj",
17
+ "model.visual.pos_embed",
18
+ "visual.merger.linear_fc1",
19
+ "visual.merger.linear_fc2",
20
+ "visual.merger.norm",
21
+ "visual.patch_embed.proj",
22
+ "visual.pos_embed",
23
+ "model.visual.blocks.0.attn.proj",
24
+ "model.visual.blocks.0.attn.qkv",
25
+ "model.visual.blocks.0.mlp.linear_fc1",
26
+ "model.visual.blocks.0.mlp.linear_fc2",
27
+ "visual.blocks.0.attn.proj",
28
+ "visual.blocks.0.attn.qkv_proj",
29
+ "visual.blocks.0.mlp.linear_fc1",
30
+ "visual.blocks.0.mlp.linear_fc2",
31
+ "model.visual.blocks.1.attn.proj",
32
+ "model.visual.blocks.1.attn.qkv",
33
+ "model.visual.blocks.1.mlp.linear_fc1",
34
+ "model.visual.blocks.1.mlp.linear_fc2",
35
+ "visual.blocks.1.attn.proj",
36
+ "visual.blocks.1.attn.qkv_proj",
37
+ "visual.blocks.1.mlp.linear_fc1",
38
+ "visual.blocks.1.mlp.linear_fc2",
39
+ "model.visual.blocks.2.attn.proj",
40
+ "model.visual.blocks.2.attn.qkv",
41
+ "model.visual.blocks.2.mlp.linear_fc1",
42
+ "model.visual.blocks.2.mlp.linear_fc2",
43
+ "visual.blocks.2.attn.proj",
44
+ "visual.blocks.2.attn.qkv_proj",
45
+ "visual.blocks.2.mlp.linear_fc1",
46
+ "visual.blocks.2.mlp.linear_fc2",
47
+ "model.visual.blocks.3.attn.proj",
48
+ "model.visual.blocks.3.attn.qkv",
49
+ "model.visual.blocks.3.mlp.linear_fc1",
50
+ "model.visual.blocks.3.mlp.linear_fc2",
51
+ "visual.blocks.3.attn.proj",
52
+ "visual.blocks.3.attn.qkv_proj",
53
+ "visual.blocks.3.mlp.linear_fc1",
54
+ "visual.blocks.3.mlp.linear_fc2",
55
+ "model.visual.blocks.4.attn.proj",
56
+ "model.visual.blocks.4.attn.qkv",
57
+ "model.visual.blocks.4.mlp.linear_fc1",
58
+ "model.visual.blocks.4.mlp.linear_fc2",
59
+ "visual.blocks.4.attn.proj",
60
+ "visual.blocks.4.attn.qkv_proj",
61
+ "visual.blocks.4.mlp.linear_fc1",
62
+ "visual.blocks.4.mlp.linear_fc2",
63
+ "model.visual.blocks.5.attn.proj",
64
+ "model.visual.blocks.5.attn.qkv",
65
+ "model.visual.blocks.5.mlp.linear_fc1",
66
+ "model.visual.blocks.5.mlp.linear_fc2",
67
+ "visual.blocks.5.attn.proj",
68
+ "visual.blocks.5.attn.qkv_proj",
69
+ "visual.blocks.5.mlp.linear_fc1",
70
+ "visual.blocks.5.mlp.linear_fc2",
71
+ "model.visual.blocks.6.attn.proj",
72
+ "model.visual.blocks.6.attn.qkv",
73
+ "model.visual.blocks.6.mlp.linear_fc1",
74
+ "model.visual.blocks.6.mlp.linear_fc2",
75
+ "visual.blocks.6.attn.proj",
76
+ "visual.blocks.6.attn.qkv_proj",
77
+ "visual.blocks.6.mlp.linear_fc1",
78
+ "visual.blocks.6.mlp.linear_fc2",
79
+ "model.visual.blocks.7.attn.proj",
80
+ "model.visual.blocks.7.attn.qkv",
81
+ "model.visual.blocks.7.mlp.linear_fc1",
82
+ "model.visual.blocks.7.mlp.linear_fc2",
83
+ "visual.blocks.7.attn.proj",
84
+ "visual.blocks.7.attn.qkv_proj",
85
+ "visual.blocks.7.mlp.linear_fc1",
86
+ "visual.blocks.7.mlp.linear_fc2",
87
+ "model.visual.blocks.8.attn.proj",
88
+ "model.visual.blocks.8.attn.qkv",
89
+ "model.visual.blocks.8.mlp.linear_fc1",
90
+ "model.visual.blocks.8.mlp.linear_fc2",
91
+ "visual.blocks.8.attn.proj",
92
+ "visual.blocks.8.attn.qkv_proj",
93
+ "visual.blocks.8.mlp.linear_fc1",
94
+ "visual.blocks.8.mlp.linear_fc2",
95
+ "model.visual.blocks.9.attn.proj",
96
+ "model.visual.blocks.9.attn.qkv",
97
+ "model.visual.blocks.9.mlp.linear_fc1",
98
+ "model.visual.blocks.9.mlp.linear_fc2",
99
+ "visual.blocks.9.attn.proj",
100
+ "visual.blocks.9.attn.qkv_proj",
101
+ "visual.blocks.9.mlp.linear_fc1",
102
+ "visual.blocks.9.mlp.linear_fc2",
103
+ "model.visual.blocks.10.attn.proj",
104
+ "model.visual.blocks.10.attn.qkv",
105
+ "model.visual.blocks.10.mlp.linear_fc1",
106
+ "model.visual.blocks.10.mlp.linear_fc2",
107
+ "visual.blocks.10.attn.proj",
108
+ "visual.blocks.10.attn.qkv_proj",
109
+ "visual.blocks.10.mlp.linear_fc1",
110
+ "visual.blocks.10.mlp.linear_fc2",
111
+ "model.visual.blocks.11.attn.proj",
112
+ "model.visual.blocks.11.attn.qkv",
113
+ "model.visual.blocks.11.mlp.linear_fc1",
114
+ "model.visual.blocks.11.mlp.linear_fc2",
115
+ "visual.blocks.11.attn.proj",
116
+ "visual.blocks.11.attn.qkv_proj",
117
+ "visual.blocks.11.mlp.linear_fc1",
118
+ "visual.blocks.11.mlp.linear_fc2",
119
+ "model.visual.blocks.12.attn.proj",
120
+ "model.visual.blocks.12.attn.qkv",
121
+ "model.visual.blocks.12.mlp.linear_fc1",
122
+ "model.visual.blocks.12.mlp.linear_fc2",
123
+ "visual.blocks.12.attn.proj",
124
+ "visual.blocks.12.attn.qkv_proj",
125
+ "visual.blocks.12.mlp.linear_fc1",
126
+ "visual.blocks.12.mlp.linear_fc2",
127
+ "model.visual.blocks.13.attn.proj",
128
+ "model.visual.blocks.13.attn.qkv",
129
+ "model.visual.blocks.13.mlp.linear_fc1",
130
+ "model.visual.blocks.13.mlp.linear_fc2",
131
+ "visual.blocks.13.attn.proj",
132
+ "visual.blocks.13.attn.qkv_proj",
133
+ "visual.blocks.13.mlp.linear_fc1",
134
+ "visual.blocks.13.mlp.linear_fc2",
135
+ "model.visual.blocks.14.attn.proj",
136
+ "model.visual.blocks.14.attn.qkv",
137
+ "model.visual.blocks.14.mlp.linear_fc1",
138
+ "model.visual.blocks.14.mlp.linear_fc2",
139
+ "visual.blocks.14.attn.proj",
140
+ "visual.blocks.14.attn.qkv_proj",
141
+ "visual.blocks.14.mlp.linear_fc1",
142
+ "visual.blocks.14.mlp.linear_fc2",
143
+ "model.visual.blocks.15.attn.proj",
144
+ "model.visual.blocks.15.attn.qkv",
145
+ "model.visual.blocks.15.mlp.linear_fc1",
146
+ "model.visual.blocks.15.mlp.linear_fc2",
147
+ "visual.blocks.15.attn.proj",
148
+ "visual.blocks.15.attn.qkv_proj",
149
+ "visual.blocks.15.mlp.linear_fc1",
150
+ "visual.blocks.15.mlp.linear_fc2",
151
+ "model.visual.blocks.16.attn.proj",
152
+ "model.visual.blocks.16.attn.qkv",
153
+ "model.visual.blocks.16.mlp.linear_fc1",
154
+ "model.visual.blocks.16.mlp.linear_fc2",
155
+ "visual.blocks.16.attn.proj",
156
+ "visual.blocks.16.attn.qkv_proj",
157
+ "visual.blocks.16.mlp.linear_fc1",
158
+ "visual.blocks.16.mlp.linear_fc2",
159
+ "model.visual.blocks.17.attn.proj",
160
+ "model.visual.blocks.17.attn.qkv",
161
+ "model.visual.blocks.17.mlp.linear_fc1",
162
+ "model.visual.blocks.17.mlp.linear_fc2",
163
+ "visual.blocks.17.attn.proj",
164
+ "visual.blocks.17.attn.qkv_proj",
165
+ "visual.blocks.17.mlp.linear_fc1",
166
+ "visual.blocks.17.mlp.linear_fc2",
167
+ "model.visual.blocks.18.attn.proj",
168
+ "model.visual.blocks.18.attn.qkv",
169
+ "model.visual.blocks.18.mlp.linear_fc1",
170
+ "model.visual.blocks.18.mlp.linear_fc2",
171
+ "visual.blocks.18.attn.proj",
172
+ "visual.blocks.18.attn.qkv_proj",
173
+ "visual.blocks.18.mlp.linear_fc1",
174
+ "visual.blocks.18.mlp.linear_fc2",
175
+ "model.visual.blocks.19.attn.proj",
176
+ "model.visual.blocks.19.attn.qkv",
177
+ "model.visual.blocks.19.mlp.linear_fc1",
178
+ "model.visual.blocks.19.mlp.linear_fc2",
179
+ "visual.blocks.19.attn.proj",
180
+ "visual.blocks.19.attn.qkv_proj",
181
+ "visual.blocks.19.mlp.linear_fc1",
182
+ "visual.blocks.19.mlp.linear_fc2",
183
+ "model.visual.blocks.20.attn.proj",
184
+ "model.visual.blocks.20.attn.qkv",
185
+ "model.visual.blocks.20.mlp.linear_fc1",
186
+ "model.visual.blocks.20.mlp.linear_fc2",
187
+ "visual.blocks.20.attn.proj",
188
+ "visual.blocks.20.attn.qkv_proj",
189
+ "visual.blocks.20.mlp.linear_fc1",
190
+ "visual.blocks.20.mlp.linear_fc2",
191
+ "model.visual.blocks.21.attn.proj",
192
+ "model.visual.blocks.21.attn.qkv",
193
+ "model.visual.blocks.21.mlp.linear_fc1",
194
+ "model.visual.blocks.21.mlp.linear_fc2",
195
+ "visual.blocks.21.attn.proj",
196
+ "visual.blocks.21.attn.qkv_proj",
197
+ "visual.blocks.21.mlp.linear_fc1",
198
+ "visual.blocks.21.mlp.linear_fc2",
199
+ "model.visual.blocks.22.attn.proj",
200
+ "model.visual.blocks.22.attn.qkv",
201
+ "model.visual.blocks.22.mlp.linear_fc1",
202
+ "model.visual.blocks.22.mlp.linear_fc2",
203
+ "visual.blocks.22.attn.proj",
204
+ "visual.blocks.22.attn.qkv_proj",
205
+ "visual.blocks.22.mlp.linear_fc1",
206
+ "visual.blocks.22.mlp.linear_fc2",
207
+ "model.visual.blocks.23.attn.proj",
208
+ "model.visual.blocks.23.attn.qkv",
209
+ "model.visual.blocks.23.mlp.linear_fc1",
210
+ "model.visual.blocks.23.mlp.linear_fc2",
211
+ "visual.blocks.23.attn.proj",
212
+ "visual.blocks.23.attn.qkv_proj",
213
+ "visual.blocks.23.mlp.linear_fc1",
214
+ "visual.blocks.23.mlp.linear_fc2",
215
+ "model.visual.blocks.24.attn.proj",
216
+ "model.visual.blocks.24.attn.qkv",
217
+ "model.visual.blocks.24.mlp.linear_fc1",
218
+ "model.visual.blocks.24.mlp.linear_fc2",
219
+ "visual.blocks.24.attn.proj",
220
+ "visual.blocks.24.attn.qkv_proj",
221
+ "visual.blocks.24.mlp.linear_fc1",
222
+ "visual.blocks.24.mlp.linear_fc2",
223
+ "model.visual.blocks.25.attn.proj",
224
+ "model.visual.blocks.25.attn.qkv",
225
+ "model.visual.blocks.25.mlp.linear_fc1",
226
+ "model.visual.blocks.25.mlp.linear_fc2",
227
+ "visual.blocks.25.attn.proj",
228
+ "visual.blocks.25.attn.qkv_proj",
229
+ "visual.blocks.25.mlp.linear_fc1",
230
+ "visual.blocks.25.mlp.linear_fc2",
231
+ "model.visual.blocks.26.attn.proj",
232
+ "model.visual.blocks.26.attn.qkv",
233
+ "model.visual.blocks.26.mlp.linear_fc1",
234
+ "model.visual.blocks.26.mlp.linear_fc2",
235
+ "visual.blocks.26.attn.proj",
236
+ "visual.blocks.26.attn.qkv_proj",
237
+ "visual.blocks.26.mlp.linear_fc1",
238
+ "visual.blocks.26.mlp.linear_fc2",
239
+ "model.visual.deepstack_merger_list.0.linear_fc1",
240
+ "model.visual.deepstack_merger_list.0.linear_fc2",
241
+ "model.visual.deepstack_merger_list.0.norm",
242
+ "visual.deepstack_merger_list.0.linear_fc1",
243
+ "visual.deepstack_merger_list.0.linear_fc2",
244
+ "visual.deepstack_merger_list.0.norm",
245
+ "model.visual.deepstack_merger_list.1.linear_fc1",
246
+ "model.visual.deepstack_merger_list.1.linear_fc2",
247
+ "model.visual.deepstack_merger_list.1.norm",
248
+ "visual.deepstack_merger_list.1.linear_fc1",
249
+ "visual.deepstack_merger_list.1.linear_fc2",
250
+ "visual.deepstack_merger_list.1.norm",
251
+ "model.visual.deepstack_merger_list.2.linear_fc1",
252
+ "model.visual.deepstack_merger_list.2.linear_fc2",
253
+ "model.visual.deepstack_merger_list.2.norm",
254
+ "visual.deepstack_merger_list.2.linear_fc1",
255
+ "visual.deepstack_merger_list.2.linear_fc2",
256
+ "visual.deepstack_merger_list.2.norm",
257
+ "model.language_model.layers.0.mlp.gate",
258
+ "model.language_model.layers.1.mlp.gate",
259
+ "model.language_model.layers.2.mlp.gate",
260
+ "model.language_model.layers.3.mlp.gate",
261
+ "model.language_model.layers.4.mlp.gate",
262
+ "model.language_model.layers.5.mlp.gate",
263
+ "model.language_model.layers.6.mlp.gate",
264
+ "model.language_model.layers.7.mlp.gate",
265
+ "model.language_model.layers.8.mlp.gate",
266
+ "model.language_model.layers.9.mlp.gate",
267
+ "model.language_model.layers.10.mlp.gate",
268
+ "model.language_model.layers.11.mlp.gate",
269
+ "model.language_model.layers.12.mlp.gate",
270
+ "model.language_model.layers.13.mlp.gate",
271
+ "model.language_model.layers.14.mlp.gate",
272
+ "model.language_model.layers.15.mlp.gate",
273
+ "model.language_model.layers.16.mlp.gate",
274
+ "model.language_model.layers.17.mlp.gate",
275
+ "model.language_model.layers.18.mlp.gate",
276
+ "model.language_model.layers.19.mlp.gate",
277
+ "model.language_model.layers.20.mlp.gate",
278
+ "model.language_model.layers.21.mlp.gate",
279
+ "model.language_model.layers.22.mlp.gate",
280
+ "model.language_model.layers.23.mlp.gate",
281
+ "model.language_model.layers.24.mlp.gate",
282
+ "model.language_model.layers.25.mlp.gate",
283
+ "model.language_model.layers.26.mlp.gate",
284
+ "model.language_model.layers.27.mlp.gate",
285
+ "model.language_model.layers.28.mlp.gate",
286
+ "model.language_model.layers.29.mlp.gate",
287
+ "model.language_model.layers.30.mlp.gate",
288
+ "model.language_model.layers.31.mlp.gate",
289
+ "model.language_model.layers.32.mlp.gate",
290
+ "model.language_model.layers.33.mlp.gate",
291
+ "model.language_model.layers.34.mlp.gate",
292
+ "model.language_model.layers.35.mlp.gate",
293
+ "model.language_model.layers.36.mlp.gate",
294
+ "model.language_model.layers.37.mlp.gate",
295
+ "model.language_model.layers.38.mlp.gate",
296
+ "model.language_model.layers.39.mlp.gate",
297
+ "model.language_model.layers.40.mlp.gate",
298
+ "model.language_model.layers.41.mlp.gate",
299
+ "model.language_model.layers.42.mlp.gate",
300
+ "model.language_model.layers.43.mlp.gate",
301
+ "model.language_model.layers.44.mlp.gate",
302
+ "model.language_model.layers.45.mlp.gate",
303
+ "model.language_model.layers.46.mlp.gate",
304
+ "model.language_model.layers.47.mlp.gate",
305
+ "model.language_model.layers.48.mlp.gate",
306
+ "model.language_model.layers.49.mlp.gate",
307
+ "model.language_model.layers.50.mlp.gate",
308
+ "model.language_model.layers.51.mlp.gate",
309
+ "model.language_model.layers.52.mlp.gate",
310
+ "model.language_model.layers.53.mlp.gate",
311
+ "model.language_model.layers.54.mlp.gate",
312
+ "model.language_model.layers.55.mlp.gate",
313
+ "model.language_model.layers.56.mlp.gate",
314
+ "model.language_model.layers.57.mlp.gate",
315
+ "model.language_model.layers.58.mlp.gate",
316
+ "model.language_model.layers.59.mlp.gate",
317
+ "model.language_model.layers.60.mlp.gate",
318
+ "model.language_model.layers.61.mlp.gate",
319
+ "model.language_model.layers.62.mlp.gate",
320
+ "model.language_model.layers.63.mlp.gate",
321
+ "model.language_model.layers.64.mlp.gate",
322
+ "model.language_model.layers.65.mlp.gate",
323
+ "model.language_model.layers.66.mlp.gate",
324
+ "model.language_model.layers.67.mlp.gate",
325
+ "model.language_model.layers.68.mlp.gate",
326
+ "model.language_model.layers.69.mlp.gate",
327
+ "model.language_model.layers.70.mlp.gate",
328
+ "model.language_model.layers.71.mlp.gate",
329
+ "model.language_model.layers.72.mlp.gate",
330
+ "model.language_model.layers.73.mlp.gate",
331
+ "model.language_model.layers.74.mlp.gate",
332
+ "model.language_model.layers.75.mlp.gate",
333
+ "model.language_model.layers.76.mlp.gate",
334
+ "model.language_model.layers.77.mlp.gate",
335
+ "model.language_model.layers.78.mlp.gate",
336
+ "model.language_model.layers.79.mlp.gate",
337
+ "model.language_model.layers.80.mlp.gate",
338
+ "model.language_model.layers.81.mlp.gate",
339
+ "model.language_model.layers.82.mlp.gate",
340
+ "model.language_model.layers.83.mlp.gate",
341
+ "model.language_model.layers.84.mlp.gate",
342
+ "model.language_model.layers.85.mlp.gate",
343
+ "model.language_model.layers.86.mlp.gate",
344
+ "model.language_model.layers.87.mlp.gate",
345
+ "model.language_model.layers.88.mlp.gate",
346
+ "model.language_model.layers.89.mlp.gate",
347
+ "model.language_model.layers.90.mlp.gate",
348
+ "model.language_model.layers.91.mlp.gate",
349
+ "model.language_model.layers.92.mlp.gate",
350
+ "model.language_model.layers.93.mlp.gate"
351
+ ],
352
+ "quant_method": "fp8",
353
+ "weight_block_size": [
354
+ 128,
355
+ 128
356
+ ]
357
+ },
358
+ "text_config": {
359
+ "attention_bias": false,
360
+ "attention_dropout": 0.0,
361
+ "bos_token_id": 151643,
362
+ "decoder_sparse_step": 1,
363
+ "torch_dtype": "bfloat16",
364
+ "eos_token_id": 151645,
365
+ "head_dim": 128,
366
+ "hidden_act": "silu",
367
+ "hidden_size": 4096,
368
+ "initializer_range": 0.02,
369
+ "intermediate_size": 12288,
370
+ "max_position_embeddings": 262144,
371
+ "mlp_only_layers": [],
372
+ "model_type": "qwen3_vl_moe_text",
373
+ "moe_intermediate_size": 1536,
374
+ "norm_topk_prob": true,
375
+ "num_attention_heads": 64,
376
+ "num_experts": 128,
377
+ "num_experts_per_tok": 8,
378
+ "num_hidden_layers": 94,
379
+ "num_key_value_heads": 4,
380
+ "rms_norm_eps": 1e-06,
381
+ "rope_scaling": {
382
+ "mrope_interleaved": true,
383
+ "mrope_section": [
384
+ 24,
385
+ 20,
386
+ 20
387
+ ],
388
+ "rope_type": "default"
389
+ },
390
+ "rope_theta": 5000000,
391
+ "router_aux_loss_coef": 0.001,
392
+ "use_cache": true,
393
+ "vocab_size": 151936
394
+ },
395
+ "tie_word_embeddings": false,
396
+ "transformers_version": "4.57.1",
397
+ "unsloth_fixed": true,
398
+ "video_token_id": 151656,
399
+ "vision_config": {
400
+ "deepstack_visual_indexes": [
401
+ 8,
402
+ 16,
403
+ 24
404
+ ],
405
+ "depth": 27,
406
+ "hidden_act": "gelu_pytorch_tanh",
407
+ "hidden_size": 1152,
408
+ "in_channels": 3,
409
+ "initializer_range": 0.02,
410
+ "intermediate_size": 4304,
411
+ "model_type": "qwen3_vl_moe",
412
+ "num_heads": 16,
413
+ "num_position_embeddings": 2304,
414
+ "out_hidden_size": 4096,
415
+ "patch_size": 16,
416
+ "spatial_merge_size": 2,
417
+ "temporal_patch_size": 2
418
+ },
419
+ "vision_end_token_id": 151653,
420
+ "vision_start_token_id": 151652
421
+ }
generation_config.json ADDED
@@ -0,0 +1,14 @@
1
+ {
2
+ "bos_token_id": 151643,
3
+ "pad_token_id": 151643,
4
+ "do_sample": true,
5
+ "eos_token_id": [
6
+ 151645,
7
+ 151643
8
+ ],
9
+ "top_k": 20,
10
+ "top_p": 0.95,
11
+ "repetition_penalty": 1.0,
12
+ "temperature": 0.8,
13
+ "transformers_version": "4.56.0"
14
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ef2645131a322a5caea03366dc23f4714c2eb4157dc307b0e3c4b86298e4bd81
3
+ size 9956019648
model-00002-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:677c928ad1093c446f63d43d20327f9118f8c98b67b62eea9f574988522aef56
3
+ size 9955588840
model-00003-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:581ac4dcc588a27debc2e5bc01727a14ac9350d9dd01c28c0e461e4e8e10b8ea
3
+ size 9955588840
model-00004-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d32f90d60f575d3adeed535adf6c85da2b081d90ac76cb6f10dce4b70420dce7
3
+ size 9955588824
model-00005-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:56b6762edc0e99610b310f143c3ea991c88719324964c16b34cfc3c614d94294
3
+ size 9955588840
model-00006-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6681731e94f10752f1859898fcc9c8b33611610bd130b9cd300dd92d9d9a032b
3
+ size 9955588840
model-00007-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a48e58631575b4b7c1b55b60034a8d641221147a0912594d3a6adaa3a992f7fc
3
+ size 9955588824
model-00008-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dbbe5c91fd997b20e3f97eece6b42430485ef3e43e67fe4605f39cbaa26fe26b
3
+ size 9955588840
model-00009-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:973797d9edfe69cd81795c4a037e882b041ddd2d1a557614108d6329e206fc3c
3
+ size 9955588824
model-00010-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:607d48aa1451992044c387d56fa2512a9ab4e08fff6906807e1a3920be5434ba
3
+ size 9955588840
model-00011-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5e64b71bb86526c4d9ad276c7a30a26c2c66cc183b5303f4e2436bb04eaf8f75
3
+ size 9955588840
model-00012-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5391af01939b993693df3d09a78460125db086855e2bd6cc5396d7b5ebed6eb
3
+ size 9955588824
model-00013-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:519ba01428f7186632359df899cc7b44e3e07eea9a2c2312220ccf46b45f4af7
3
+ size 9955588840
model-00014-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6dd3270104e01cf9d9d998caada1d59e8bf698d544a8ccfa809fbfdfd65a7403
3
+ size 9955588840
model-00015-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:52dea7ef12656bccfd4617647078b6de593ee50a0c7c5c975509258dc83b37e5
3
+ size 9955588824
model-00016-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ea59e25b02306c4988668fe1d9fd1ad203d0ea7abb6968debabb6fdbc818f94a
3
+ size 9955588840
model-00017-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f1823d46b18dea8a1848a7165118481c37c64f8a14daab3324d68a7e1e20c1e4
3
+ size 9955588840
model-00018-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f17371fd672c102e228f65ec5bdc915b8b2b54609ad8aae480edbba10ceb4885
3
+ size 9955588824
model-00019-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:465c4d92e11dac9b322a13f8724546df4fe6a8bf1c73a2ee184606e0e326bfbe
3
+ size 9955588840
model-00020-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ebe11dbfdad8c0f10e3f4ca906a35a86c132cbfc2e1ffd95440fb31e971f38b2
3
+ size 9955588824
model-00021-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b4c1eaf33b48b80452bf8dbcb339d7753a7ded492f82b865be81f27704f7e918
3
+ size 9955588840
model-00022-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8f499f54e012c1e8c2c1c990f2b3da6c2db241a5781d36ccfbd6392bf16c2bb3
3
+ size 9955588840
model-00023-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:946d5d76c97ef19e07750d2b1be75c00ff49d56d922bd47d3a0da4295c29ff90
3
+ size 9955588824
model-00024-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eae53aa2855a2a2c32ea7d274534f818ef2ee6bbdfa7d262cd75e723f69e6955
3
+ size 8619507568
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1,39 @@
1
+ {
2
+ "crop_size": null,
3
+ "data_format": "channels_first",
4
+ "default_to_square": true,
5
+ "device": null,
6
+ "disable_grouping": null,
7
+ "do_center_crop": null,
8
+ "do_convert_rgb": true,
9
+ "do_normalize": true,
10
+ "do_pad": null,
11
+ "do_rescale": true,
12
+ "do_resize": true,
13
+ "image_mean": [
14
+ 0.5,
15
+ 0.5,
16
+ 0.5
17
+ ],
18
+ "image_processor_type": "Qwen2VLImageProcessorFast",
19
+ "image_std": [
20
+ 0.5,
21
+ 0.5,
22
+ 0.5
23
+ ],
24
+ "input_data_format": null,
25
+ "max_pixels": null,
26
+ "merge_size": 2,
27
+ "min_pixels": null,
28
+ "pad_size": null,
29
+ "patch_size": 16,
30
+ "processor_class": "Qwen3VLProcessor",
31
+ "resample": 3,
32
+ "rescale_factor": 0.00392156862745098,
33
+ "return_tensors": null,
34
+ "size": {
35
+ "longest_edge": 16777216,
36
+ "shortest_edge": 65536
37
+ },
38
+ "temporal_patch_size": 2
39
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|object_ref_start|>",
6
+ "<|object_ref_end|>",
7
+ "<|box_start|>",
8
+ "<|box_end|>",
9
+ "<|quad_start|>",
10
+ "<|quad_end|>",
11
+ "<|vision_start|>",
12
+ "<|vision_end|>",
13
+ "<|vision_pad|>",
14
+ "<|image_pad|>",
15
+ "<|video_pad|>"
16
+ ],
17
+ "eos_token": {
18
+ "content": "<|im_end|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": {
25
+ "content": "<|vision_pad|>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ }
31
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
3
+ size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,242 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<tool_response>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": false
188
+ },
189
+ "151666": {
190
+ "content": "</tool_response>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": false
196
+ },
197
+ "151667": {
198
+ "content": "<think>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": false
204
+ },
205
+ "151668": {
206
+ "content": "</think>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": false
212
+ }
213
+ },
214
+ "additional_special_tokens": [
215
+ "<|im_start|>",
216
+ "<|im_end|>",
217
+ "<|object_ref_start|>",
218
+ "<|object_ref_end|>",
219
+ "<|box_start|>",
220
+ "<|box_end|>",
221
+ "<|quad_start|>",
222
+ "<|quad_end|>",
223
+ "<|vision_start|>",
224
+ "<|vision_end|>",
225
+ "<|vision_pad|>",
226
+ "<|image_pad|>",
227
+ "<|video_pad|>"
228
+ ],
229
+ "bos_token": null,
230
+ "clean_up_tokenization_spaces": false,
231
+ "eos_token": "<|im_end|>",
232
+ "errors": "replace",
233
+ "extra_special_tokens": {},
234
+ "model_max_length": 262144,
235
+ "pad_token": "<|vision_pad|>",
236
+ "padding_side": "left",
237
+ "processor_class": "Qwen3VLProcessor",
238
+ "split_special_tokens": false,
239
+ "tokenizer_class": "Qwen2Tokenizer",
240
+ "unk_token": null,
241
+ "chat_template": "{%- set image_count = namespace(value=0) %}\n{%- set video_count = namespace(value=0) %}\n{%- macro render_content(content, do_vision_count) %}\n {%- if content is string %}\n {{- content }}\n {%- else %}\n {%- for item in content %}\n {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}\n {%- if do_vision_count %}\n {%- set image_count.value = image_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}Picture {{ image_count.value }}: {% endif -%}\n <|vision_start|><|image_pad|><|vision_end|>\n {%- elif 'video' in item or item.type == 'video' %}\n {%- if do_vision_count %}\n {%- set video_count.value = video_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}Video {{ video_count.value }}: {% endif -%}\n <|vision_start|><|video_pad|><|vision_end|>\n {%- elif 'text' in item %}\n {{- item.text }}\n {%- endif %}\n {%- endfor %}\n {%- endif %}\n{%- endmacro %}\n{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- render_content(messages[0].content, false) + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + render_content(messages[0].content, false) + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" %}\n {%- set content = render_content(message.content, false) %}\n {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- set content = render_content(message.content, True) %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and 
content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n<think>\\n' }}\n{%- endif %}\n"
242
+ }
video_preprocessor_config.json ADDED
@@ -0,0 +1,41 @@
1
+ {
2
+ "crop_size": null,
3
+ "data_format": "channels_first",
4
+ "default_to_square": true,
5
+ "device": null,
6
+ "do_center_crop": null,
7
+ "do_convert_rgb": true,
8
+ "do_normalize": true,
9
+ "do_rescale": true,
10
+ "do_resize": true,
11
+ "do_sample_frames": true,
12
+ "fps": 2,
13
+ "image_mean": [
14
+ 0.5,
15
+ 0.5,
16
+ 0.5
17
+ ],
18
+ "image_std": [
19
+ 0.5,
20
+ 0.5,
21
+ 0.5
22
+ ],
23
+ "input_data_format": null,
24
+ "max_frames": 768,
25
+ "merge_size": 2,
26
+ "min_frames": 4,
27
+ "num_frames": null,
28
+ "pad_size": null,
29
+ "patch_size": 16,
30
+ "processor_class": "Qwen3VLProcessor",
31
+ "resample": 3,
32
+ "rescale_factor": 0.00392156862745098,
33
+ "return_metadata": false,
34
+ "size": {
35
+ "longest_edge": 25165824,
36
+ "shortest_edge": 4096
37
+ },
38
+ "temporal_patch_size": 2,
39
+ "video_metadata": null,
40
+ "video_processor_type": "Qwen3VLVideoProcessor"
41
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff