🧠 Teo T4-FAST (3B)

Open. Fast. Free.
A lightweight, high-performance instruction-tuned model, optimized for real-world deployment.

✅ 100% open weights (Apache 2.0)
✅ No API key needed; runs anywhere
✅ Quantized GGUF versions coming soon
✅ Trained on clean, curated data

⚠️ Note: This repo is a placeholder. Full weights & GGUFs will be uploaded soon.
Stay tuned, or join the waitlist!


📊 Planned Specs

| Parameter | Value |
|---|---|
| Architecture | Decoder-only Transformer |
| Parameters | ~3.1B |
| Context | 4,096 tokens |
| License | Apache 2.0 |
| Inference | transformers, llama.cpp, Ollama, WebLLM (see the example below) |
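
Once the weights are published, loading the model in Python with the Hugging Face transformers library should look roughly like the sketch below. The repo ID `TeoAI/Teo-T4-FAST-3B` is an assumption (this card does not yet list a final repository path); swap in the real ID at launch.

```python
# Minimal sketch, assuming a placeholder repo ID -- the real path is not yet published.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeoAI/Teo-T4-FAST-3B"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain what a decoder-only Transformer is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 128 new tokens within the planned 4,096-token context window.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```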

🚀 Coming Soon

  • PyTorch weights (.safetensors)
  • GGUF quantized versions (Q4_K_M, Q5_K_M); see the sketch after this list
  • Hugging Face demo
  • Browser demo (WebLLM)
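
For the planned GGUF quants, a local run through the llama-cpp-python bindings might look like the sketch below. The file name `Teo-T4-FAST-3B.Q4_K_M.gguf` is hypothetical, derived from the Q4_K_M build listed above; the actual file has not been released.

```python
# Minimal sketch for the planned GGUF builds, using the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="Teo-T4-FAST-3B.Q4_K_M.gguf",  # hypothetical file name -- not yet released
    n_ctx=4096,  # matches the planned 4,096-token context window
)

out = llm("Q: What license is Teo T4-FAST released under?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```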

📣 Want Early Access?

Visit: https://tinoai.wuaze.com
Follow progress. Give feedback. Be part of the launch.


© 2025 TeoAI. The future is open.
