Hunyuan-0.5B-Instruct-GGUF

This repository contains GGUF quants for tencent/Hunyuan-0.5B-Instruct.

Hunyuan-0.5B is part of Tencent's efficient LLM series, featuring Hybrid Reasoning (fast and slow thinking modes) and a native 256K context window. Even at 0.5B parameters, it inherits strong performance from the larger Hunyuan models, making it well suited to edge devices and resource-constrained environments.

Usage

llama.cpp

You can run these quants with the llama.cpp CLI. Point -m at the specific quant file you downloaded (Q4_K_M is used as an example throughout):

./llama-cli -m Hunyuan-0.5B-Instruct-Q4_K_M.gguf -p "Your prompt here" -n 128
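
llama.cpp also ships llama-server, which exposes an OpenAI-compatible HTTP API; a minimal sketch, again assuming the example Q4_K_M filename:

# Serve the model locally (chat endpoint at /v1/chat/completions)
./llama-server -m Hunyuan-0.5B-Instruct-Q4_K_M.gguf --port 8080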

Special Features

  • Thinking Mode: This model supports "slow-thinking" (Chain-of-Thought) reasoning. To disable it, prefix your prompt with /no_think, or set enable_thinking=False when applying the chat template; see the sketch after this list.
  • Long Context: Natively supports a 256K-token context window.
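
A minimal llama-cli sketch of both features (the filename and prompts are illustrative; a full 256K context requires correspondingly more memory):

# Skip the reasoning trace for a fast, direct answer
./llama-cli -m Hunyuan-0.5B-Instruct-Q4_K_M.gguf -p "/no_think What is the capital of France?" -n 64

# Raise the context window with -c (up to the native 262144 tokens)
./llama-cli -m Hunyuan-0.5B-Instruct-Q4_K_M.gguf -c 262144 -p "Your long-document prompt here" -n 128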

Model details

  • Format: GGUF
  • Model size: 0.5B params
  • Architecture: hunyuan-dense
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit
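
To fetch a single quant without cloning the whole repository, the Hugging Face CLI can download one file at a time; a sketch assuming the example Q4_K_M filename:

# Download one quant file into the current directory
huggingface-cli download Fu01978/Hunyuan-0.5B-Instruct-GGUF Hunyuan-0.5B-Instruct-Q4_K_M.gguf --local-dir .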
