
Qwen3-VisionCaption-2B-Thinking

Qwen3-VisionCaption-2B-Thinking is an abliterated (v1.0) variant built on Qwen3-VL-2B-Instruct-abliterated-v1, which in turn derives from Qwen3-VL-2B-Instruct. It is optimized for seamless, high-precision image captioning and uncensored visual analysis, and is engineered for robust caption generation, deep reasoning, and unrestricted descriptive understanding across diverse visual and multimodal contexts.

Key Highlights

  • Abliterated and uncensored captioning for descriptive and reasoning-focused outputs.
  • High-fidelity captions suitable for general, artistic, technical, synthetic, abstract, and low-context images.
  • Consistent performance across wide, tall, square, panoramic, and irregular visual formats.
  • Adjustable detail control, ranging from concise summaries to fine-grained reasoning.
  • Built on the Qwen3-VL-2B architecture with enhanced multimodal reasoning and instruction following.
  • Multilingual output capability through prompt engineering (see the prompt sketch after this list).
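
As a sketch of the detail and language controls mentioned above, the phrasings below are illustrative assumptions rather than fixed commands; any instruction of similar intent should steer the model:

# Illustrative prompt phrasings (assumptions, not a fixed API) for steering
# detail level and output language through the user message.
concise_prompt  = "Caption this image in one short sentence."
detailed_prompt = "Provide an exhaustive, fine-grained caption with step-by-step reasoning."
spanish_prompt  = "Describe esta imagen en español, con gran detalle."  # multilingual output via prompting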

Datasets

This model was fine-tuned on the following datasets:

  • prithivMLmods/blip3o-caption-mini-arrow: a high-quality curated dataset with multi-style captions oriented toward descriptive and reasoning-rich visual interpretation.

  • prithivMLmods/Caption3o-Opt-v2: an optimized caption dataset targeting precision, context understanding, and descriptive generalization across diverse visual categories.

  • Private and unlisted datasets curated for uncensored and domain-specific image captioning tasks, enabling unrestricted visual understanding beyond standard filtered datasets.

The training objective focused on improving performance in unconstrained descriptive image captioning, particularly for edge cases and visual categories that are typically filtered out in standard captioning benchmarks.
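
For inspection, the two public datasets can be loaded with the Hugging Face datasets library. A minimal sketch, assuming the default train split (actual split names may differ):

from datasets import load_dataset

# Load the public caption datasets for inspection (split name is an assumption).
blip3o = load_dataset("prithivMLmods/blip3o-caption-mini-arrow", split="train")
opt_v2 = load_dataset("prithivMLmods/Caption3o-Opt-v2", split="train")
print(blip3o[0])  # one record: an image plus its caption field(s)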

Quick Start with Transformers

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model with automatic dtype selection and device placement.
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Qwen3-VisionCaption-2B-Thinking", torch_dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("prithivMLmods/Qwen3-VisionCaption-2B-Thinking")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption and reasoning for this image."},
        ],
    }
]

# Render the chat template and collect image/video tensors from the messages.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)

inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate; raise max_new_tokens if long reasoning traces get truncated.
generated_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens so only newly generated text is decoded.
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
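
Because this is a Thinking variant, the decoded text may interleave a reasoning trace with the final caption. A minimal post-processing sketch, assuming the model emits Qwen3-style <think>...</think> blocks (an assumption for this fine-tune):

# Separate the reasoning trace from the final caption, assuming
# Qwen3-style <think>...</think> delimiters appear in the output.
raw = output_text[0]
if "</think>" in raw:
    reasoning, _, caption = raw.partition("</think>")
    reasoning = reasoning.replace("<think>", "").strip()
    caption = caption.strip()
else:
    reasoning, caption = "", raw.strip()
print("Reasoning:", reasoning)
print("Caption:", caption)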

The GGUF build runs with llama.cpp and with llama.cpp-based platforms such as Jan, Ollama, and LM Studio.


GGUF: https://huggingface.co/prithivMLmods/Qwen3-VisionCaption-2B-Thinking-GGUF

Intended Use

  • High-precision captioning and reasoning for general-purpose or non-standard visual data.
  • Uncensored analytical captioning for research, red-teaming, and moderation evaluation.
  • Creative and narrative-oriented multimodal tasks.
  • Understanding stylized, synthetic, or complex images with challenging aspect ratios.

Limitations

  • May produce explicit, sensitive, or offensive descriptions depending on visual content.
  • Not recommended for production environments requiring strict safety controls.
  • Performance may vary for heavily abstract or synthetic content.
  • Output tone depends on prompt phrasing and detail-level settings.