# Magic Model
Fine-tuned language model for MMLU-style question answering.
Developed by Likhon Sheikh
## Features
- Multi-safetensor support
- Fast tokenizer with `tokenizer.json`
- LoRA fine-tuning for efficiency (see the sketch after this list)
- MMLU-optimized responses
- Production-ready deployment
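
The training code for this model isn't published; the following is a minimal sketch of how a LoRA fine-tune of the base model could be set up with the `peft` library. The rank, scaling factor, and target modules below are illustrative assumptions, not the values actually used for this model.

```python
# Minimal LoRA setup sketch. Hyperparameters are assumptions,
# not the values used to train fariasultanacodes/magic.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")

lora_config = LoraConfig(
    r=16,                                  # assumed adapter rank
    lora_alpha=32,                         # assumed scaling factor
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter weights train
```

Because only the adapter weights receive gradients, this keeps fine-tuning memory and compute well below a full-parameter update of the 1.5B base model.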
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its fast tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("fariasultanacodes/magic")
tokenizer = AutoTokenizer.from_pretrained("fariasultanacodes/magic")

prompt = "Question: What is AI?\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens bounds the generated continuation, not the total length.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
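
The exact prompt template used during fine-tuning isn't documented. For MMLU-style multiple-choice questions, a format like the following is a reasonable assumption:

```python
# Hypothetical MMLU-style multiple-choice prompt; the actual
# training template for this model is not documented.
prompt = (
    "Question: Which planet is known as the Red Planet?\n"
    "A. Venus\n"
    "B. Mars\n"
    "C. Jupiter\n"
    "D. Saturn\n\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)  # a letter choice is short
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```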
## Pipeline Usage
```python
from transformers import pipeline

# Build a text-generation pipeline around the fine-tuned model.
generator = pipeline("text-generation", model="fariasultanacodes/magic")
result = generator("Question: Explain machine learning.\n\nAnswer:")
print(result[0]["generated_text"])
```
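
Generation parameters can be passed directly to the pipeline call. The values below are illustrative, not tuned for this model:

```python
# Pass generation kwargs through the pipeline call.
# return_full_text=False drops the prompt from the returned string.
result = generator(
    "Question: Explain machine learning.\n\nAnswer:",
    max_new_tokens=100,    # cap on newly generated tokens
    do_sample=False,       # greedy decoding for deterministic answers
    return_full_text=False,
)
print(result[0]["generated_text"])
```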
## Model Details
- Base Model: Qwen/Qwen2.5-1.5B
- Fine-tuning: LoRA adapters
- Dataset: MMLU-style questions
- Format: Safetensors (multi-file support)
- Tokenizer: Fast tokenizer (`tokenizer.json`)
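
Multi-file (sharded) safetensors checkpoints load transparently through `from_pretrained`. If you re-save the model locally, `transformers` can reproduce a sharded layout; the shard size below is an arbitrary example value:

```python
# Re-save the model as sharded safetensors files.
model.save_pretrained(
    "./magic-local",
    safe_serialization=True,  # write .safetensors instead of .bin
    max_shard_size="1GB",     # arbitrary example: split weights across files
)
```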
## Citation
```bibtex
@misc{magic-model-2025,
  title={Magic: MMLU-Optimized Language Model},
  author={Likhon Sheikh},
  year={2025},
  url={https://huggingface.co/fariasultanacodes/magic}
}
```
## License
Apache-2.0