VoiceCraft GGUF - Ollama Ready

Quantized GGUF versions of VoiceCraft fine-tuned on LinkedIn post generation.

πŸš€ Quick Start

# Download model
huggingface-cli download Manoghn/voicecraft-mistral-7b-gguf --local-dir ./voicecraft

# Quantize to a smaller size (recommended; see Local Quantization below)
# Assumes llama.cpp is cloned and built next to ./voicecraft
cd voicecraft
../llama.cpp/build/bin/llama-quantize voicecraft-f16.gguf voicecraft-q4_k_m.gguf q4_k_m

# Import to Ollama
ollama create voicecraft -f Modelfile

# Run
ollama run voicecraft "Generate a LinkedIn post about AI trends"
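
If the import succeeds, the model appears in Ollama's local list; a quick sanity check (assumes a standard Ollama install):

# Confirm the model was created and inspect its effective configuration
ollama list
ollama show voicecraft --modelfile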

πŸ“¦ Files

File                  Size    Description             Use Case
voicecraft-f16.gguf   ~14GB   Full 16-bit precision   Highest quality; quantize locally
Modelfile             -       Ollama configuration    Pre-configured parameters

Recommended: download the f16 version and quantize it to q4_k_m on your machine (the resulting file is ~4GB).

🎯 Model Information

  • Base Model: Mistral-7B-Instruct-v0.2
  • Fine-tuning: Custom LinkedIn post dataset (380 examples)
  • Training: 3 epochs, eval_loss: 0.7598
  • Specialization: 4 LinkedIn post types

Post Types

  1. AI Trend/News - Technical and insightful
  2. Career Learning - Reflective and growth-focused
  3. Project Update - Achievement-focused with metrics
  4. Personal Insight - Authentic and vulnerable

πŸ’» Local Quantization

After downloading, quantize the f16 file to your preferred size. The commands below assume llama.cpp has been cloned and built in the current directory.
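
A minimal build sketch using CMake (in recent llama.cpp releases the quantizer binary is named llama-quantize; older builds call it quantize):

git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B llama.cpp/build
cmake --build llama.cpp/build --config Release
# The tool lands at llama.cpp/build/bin/llama-quantize

Then pick a quantization level: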

# 4-bit (smallest, recommended for most users)
llama.cpp/build/bin/llama-quantize voicecraft-f16.gguf voicecraft-q4_k_m.gguf q4_k_m

# 5-bit (balanced quality/size)
llama.cpp/build/bin/llama-quantize voicecraft-f16.gguf voicecraft-q5_k_m.gguf q5_k_m

# 8-bit (higher quality)
llama.cpp/build/bin/llama-quantize voicecraft-f16.gguf voicecraft-q8_0.gguf q8_0

πŸ”§ Generation Parameters

Recommended settings (already in Modelfile):

  • temperature: 0.8
  • top_p: 0.95
  • repeat_penalty: 1.15
  • num_predict: 300
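
If you need to recreate or tweak the Modelfile, a minimal sketch along these lines should work; the template assumes Mistral's standard [INST] instruct format, and the Modelfile shipped in the repo may differ:

# Hypothetical sketch; the repo's Modelfile is the source of truth
FROM ./voicecraft-q4_k_m.gguf
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""
PARAMETER temperature 0.8
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.15
PARAMETER num_predict 300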

πŸ“š Usage Example

ollama run voicecraft

>>> """
Post Type: Personal Insight
Topic: Overcoming failure
Style: Authentic and vulnerable
Generate a LinkedIn post
"""
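
The same prompt can also be sent programmatically through Ollama's local REST API (port 11434 by default); a minimal sketch:

curl http://localhost:11434/api/generate -d '{
  "model": "voicecraft",
  "prompt": "Post Type: Personal Insight\nTopic: Overcoming failure\nStyle: Authentic and vulnerable\nGenerate a LinkedIn post",
  "stream": false
}'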

πŸ“„ License

Apache 2.0 (inherited from Mistral-7B)

πŸ™ Acknowledgments

  • Mistral AI for the base model
  • Hugging Face for training infrastructure
  • llama.cpp for GGUF conversion tools