Quantized DeepSeek-Coder-V2-Lite-Instruct Models
This repository provides optimized GGUF quantized versions of the DeepSeek-Coder-V2-Lite-Instruct model. These lightweight 4-bit and 5-bit variants retain the strong reasoning and code generation capabilities of the original model while drastically reducing memory and compute requirements, making them well suited for local inference or edge deployment.
Model Overview
- Original Model: DeepSeek-Coder-V2-Lite-Instruct
- Quantized Versions:
  - Q4_K_M (4-bit quantization)
  - Q5_K_M (5-bit quantization)
- Architecture: Decoder-only Transformer
- Base Model: DeepSeek-Coder-V2-Lite
- Modalities: Text only
- Developer: DeepSeek-AI
- License: deepseek-license
- Languages: English, Chinese
Quantization Details
Q4_K_M Version
- Approx. 9.66 GB memory footprint (around 70% reduction in model size)
- Best suited for consumer GPUs or CPU-based inference
- Slight accuracy trade-off for maximum efficiency
Q5_K_M Version
- Approx. 11.04 GB memory footprint (around 65% reduction in model size)
- Near full precision quality retention
- Ideal for scenarios requiring balanced speed and accuracy
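The reduction percentages above can be sanity-checked with a quick calculation. A minimal sketch, assuming a full-precision baseline of roughly 31.4 GB (a 16B-parameter model at about 2 bytes per weight in FP16; the exact baseline size is an assumption and may differ slightly):

```python
# Assumption: FP16 GGUF baseline of ~31.4 GB for the 16B-parameter model.
FP16_BASELINE_GB = 31.4

def reduction_pct(quant_gb: float, baseline_gb: float = FP16_BASELINE_GB) -> float:
    """Percent size reduction of a quantized file versus the baseline."""
    return round((1 - quant_gb / baseline_gb) * 100, 1)

print(reduction_pct(9.66))   # Q4_K_M: ~69%, i.e. the "around 70%" figure
print(reduction_pct(11.04))  # Q5_K_M: ~65%
```

Under this assumed baseline, the 9.66 GB and 11.04 GB footprints line up with the stated ~70% and ~65% reductions.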
Key Features
- State-of-the-art reasoning and code generation performance
- Optimized for multi-turn code completion and debugging assistance
- Fine-tuned on extensive programming and instruction datasets
- Strong performance in reasoning, debugging, and natural language coding tasks
- Supports long-context generation up to 32K tokens
Usage
This model is suited for developers and researchers working on program synthesis, code explanation, or interactive coding assistants.
llama.cpp (text-only)
./llama-cli -hf SandLogicTechnologies/DeepSeek-Coder-V2-Lite-Instruct-GGUF -p "Write a Python function to reverse a linked list."
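For programmatic use, the same GGUF files can be loaded with llama-cpp-python (`pip install llama-cpp-python`). A hedged sketch, not a definitive recipe: the GGUF file name below is an assumption, so check the repository's file listing for the exact name.

```python
# Sketch: local inference with llama-cpp-python on the Q4_K_M variant.
# The model file name is an assumption -- verify it in the repo's file list.
def generate(prompt: str,
             model_path: str = "DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf") -> str:
    """Load the quantized model and return a single chat completion."""
    from llama_cpp import Llama  # imported here to keep the sketch self-contained

    llm = Llama(
        model_path=model_path,
        n_ctx=32768,      # the card lists long-context support up to 32K tokens
        n_gpu_layers=-1,  # offload all layers to the GPU when one is available
    )
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}]
    )
    return out["choices"][0]["message"]["content"]

# Example call (requires the downloaded GGUF file):
# print(generate("Write a Python function to reverse a linked list."))
```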
Model Data
Dataset Overview
The DeepSeek-Coder-V2-Lite-Instruct model was trained on large-scale, high-quality open datasets consisting of:
- General programming languages (Python, C++, Java, Rust, etc.)
- Instruction-following data for reasoning and conversation
- StackExchange and competitive coding datasets
- Synthetic reasoning datasets for enhanced contextual understanding
Recommended Use Cases
- AI Coding Assistants: build real-time code assistants with low-latency responses.
- Code Analysis & Debugging: detect, explain, or correct programming errors.
- Educational Platforms: support interactive programming tutorials and practice systems.
- Edge & Low-resource Deployment: run code-capable LLMs on devices with limited memory (e.g., laptops, Jetson, Raspberry Pi).
Acknowledgments
These quantized models are derived from the original DeepSeek-Coder-V2-Lite-Instruct developed by DeepSeek-AI.
Special thanks to:
- The DeepSeek team for developing and releasing the DeepSeek-Coder-V2-Lite-Instruct model.
- Georgi Gerganov and the llama.cpp open-source community for enabling efficient model quantization and inference via the GGUF format.
Contact
For any inquiries or support, please contact us at [email protected] or visit our Website.