# VoiceCraft GGUF - Ollama Ready

Quantized GGUF versions of VoiceCraft, fine-tuned for LinkedIn post generation.
## Quick Start
```bash
# Download the model
huggingface-cli download Manoghn/voicecraft-mistral-7b-gguf --local-dir ./voicecraft

# Quantize to a smaller size (recommended)
cd voicecraft
llama.cpp/quantize voicecraft-f16.gguf voicecraft-q4_k_m.gguf q4_k_m

# Import into Ollama
ollama create voicecraft -f Modelfile

# Run
ollama run voicecraft "Generate a LinkedIn post about AI trends"
```
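Once the model is imported, it can also be called through Ollama's HTTP API (served on `localhost:11434` by default). A minimal sketch, assuming `ollama serve` is running with the model created above:

```shell
# Build the JSON request body for Ollama's /api/generate endpoint
body='{"model":"voicecraft","prompt":"Generate a LinkedIn post about AI trends","stream":false}'
echo "$body"

# Send it (requires a running Ollama server):
# curl -s http://localhost:11434/api/generate -d "$body"
```

With `"stream": false`, the server returns a single JSON object whose `response` field holds the generated post.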
## Files

| File | Size | Description | Use Case |
|---|---|---|---|
| `voicecraft-f16.gguf` | ~14 GB | Full 16-bit precision | Highest quality; quantize locally |
| `Modelfile` | - | Ollama configuration | Pre-configured parameters |
**Recommended:** download the f16 version and quantize it to `q4_k_m` on your machine (~4 GB).
## Model Information
- **Base Model:** Mistral-7B-Instruct-v0.2
- **Fine-tuning:** custom LinkedIn post dataset (380 examples)
- **Training:** 3 epochs, eval_loss: 0.7598
- **Specialization:** 4 LinkedIn post types
### Post Types

1. **AI Trend/News** - technical and insightful
2. **Career Learning** - reflective and growth-focused
3. **Project Update** - achievement-focused with metrics
4. **Personal Insight** - authentic and vulnerable
## Local Quantization
After downloading, quantize to your preferred size:
```bash
# 4-bit (smallest, recommended for most users)
llama.cpp/quantize voicecraft-f16.gguf voicecraft-q4_k_m.gguf q4_k_m

# 5-bit (balanced quality/size)
llama.cpp/quantize voicecraft-f16.gguf voicecraft-q5_k_m.gguf q5_k_m

# 8-bit (higher quality)
llama.cpp/quantize voicecraft-f16.gguf voicecraft-q8_0.gguf q8_0
```
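To produce all three variants in one pass, a small loop works. `out_name` is a helper of our own, and the quantize binary path may differ by llama.cpp version (newer builds ship it as `llama-quantize`):

```shell
# out_name: map a quant type to the output filename convention used above
out_name() { printf 'voicecraft-%s.gguf' "$1"; }

for q in q4_k_m q5_k_m q8_0; do
  echo "target: $(out_name "$q")"
  # llama.cpp/quantize voicecraft-f16.gguf "$(out_name "$q")" "$q"
done
```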
## Generation Parameters

Recommended settings (already set in the Modelfile):

- `temperature`: 0.8
- `top_p`: 0.95
- `repeat_penalty`: 1.15
- `num_predict`: 300
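These settings correspond to `PARAMETER` directives in Ollama's Modelfile format. A minimal sketch of what the Modelfile likely contains (the `FROM` path assumes a locally quantized q4_k_m file; adjust it to your filename):

```
FROM ./voicecraft-q4_k_m.gguf

PARAMETER temperature 0.8
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.15
PARAMETER num_predict 300
```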
## Usage Example

```text
ollama run voicecraft
>>> Post Type: Personal Insight
>>> Topic: Overcoming failure
>>> Style: Authentic and vulnerable
>>> Generate a LinkedIn post
```
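The structured prompt above can also be assembled non-interactively and piped into `ollama run`. `make_prompt` is a helper name of our own:

```shell
# make_prompt: format post type, topic, and style into the structured prompt
make_prompt() {
  printf 'Post Type: %s\nTopic: %s\nStyle: %s\nGenerate a LinkedIn post\n' "$1" "$2" "$3"
}

make_prompt "Personal Insight" "Overcoming failure" "Authentic and vulnerable"

# Pipe it into the model (requires the imported voicecraft model):
# make_prompt "Personal Insight" "Overcoming failure" "Authentic and vulnerable" | ollama run voicecraft
```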
## Related
- Original Model: Manoghn/voicecraft-mistral-7b
- Base Model: mistralai/Mistral-7B-Instruct-v0.2
## License
Apache 2.0 (inherited from Mistral-7B)
## Acknowledgments
- Mistral AI for the base model
- Hugging Face for training infrastructure
- llama.cpp for GGUF conversion tools