Model Card for qwen3-4b-instruct-stat-qlora-v2
This model is a QLoRA fine-tuned variant of Qwen3-4B-Instruct, specialized for explaining statistical test pipeline outputs in a clear, structured, and technically correct way. It is designed to transform structured tool_json outputs (e.g. t-test, ANOVA, correlation, clustering, chi-square results) into high-quality natural-language explanations that follow a strict five-section analytical format. Its goal is to power my personal project app available on Hugging Face Spaces. The fine-tuning process focused on reducing hallucinations, improving methodological correctness, and strengthening interpretability, while preserving the general instruction-following abilities of the base model.
Model Details
Model Description
- Developed by: João Vaz, Independent research project
- Shared by: Ozymandias2
- Model type: Instruction-tuned causal language model with LoRA adapters
- Language(s) (NLP): English
- License: Same as base model (Qwen/Qwen3-4B-Instruct-2507)
- Finetuned from model: Qwen/Qwen3-4B-Instruct-2507
This model was trained to generate structured statistical explanations using the following fixed template:
- Missing Data Analysis
- Pre-Test Diagnostics
- Test Selection Rationale
- Test Results
- Interpretation
The model explicitly avoids:
- causal language for observational analyses,
- hallucinated preprocessing steps,
- incorrect test naming or directionality.
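The fixed five-section template above lends itself to a simple programmatic check. Below is a minimal sketch; the `follows_template` helper is illustrative only and is not part of the released code:

```python
# Illustrative helper: verify that a generated explanation contains the five
# required section headers, in order. Not part of the released code.
REQUIRED_SECTIONS = [
    "Missing Data Analysis",
    "Pre-Test Diagnostics",
    "Test Selection Rationale",
    "Test Results",
    "Interpretation",
]

def follows_template(explanation: str) -> bool:
    pos = 0
    for header in REQUIRED_SECTIONS:
        idx = explanation.find(header, pos)
        if idx == -1:
            return False
        pos = idx + len(header)
    return True
```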
Model Sources
- Repository: https://github.com/JoaoLAVaz/data-chat-assistant/tree/v1/result_explainer_study
- Paper: https://github.com/JoaoLAVaz/data-chat-assistant/blob/v1/result_explainer_study/README.md
- Demo (Hugging Face Spaces; feel free to restart and test it): https://huggingface.co/spaces/Ozymandias2/data-chat-assistant
Framework versions
- PEFT 0.18.0
- Transformers ≥ 4.40
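A minimal inference sketch using these libraries is shown below. The prompt wording, generation settings, and tool_json fields are illustrative assumptions, not the exact format used during training:

```python
# Minimal, illustrative inference sketch: load the LoRA adapter on top of the base
# model and ask for an explanation of a structured tool_json result.
import json

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "Ozymandias2/qwen3-4b-instruct-stat-qlora-v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# tool_json stands in for the statistical pipeline output; these fields are made up for the example.
tool_json = {"test": "independent_t_test", "statistic": 2.18, "p_value": 0.031, "groups": ["A", "B"]}
messages = [
    {"role": "user", "content": "Explain the following statistical test output:\n" + json.dumps(tool_json)},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)
output_ids = model.generate(input_ids=input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Loading the adapter with `PeftModel.from_pretrained` keeps the base weights unchanged; for deployment, the adapter can also be merged into the base model with `merge_and_unload()`.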