Deepseek-V3-0324-W4AFP8

Model Overview

  • Model Architecture: DeepseekV3ForCausalLM
    • Input: Text
    • Output: Text
  • Model Optimizations:
    • Dense weight quantization: FP8
    • MoE weight quantization: INT4
    • Activation quantization: FP8
  • Release Date: 25/10/2025
  • Version: 1.0

Quantized version of deepseek-ai/DeepSeek-V3-0324

Model                            MMLU
novita/Deepseek-V3-0324-W4AFP8   0.8734

Model Optimizations

This model was obtained by quantizing the weights and activations of DeepSeek-V3-0324 to mixed-precision data types: INT4 weights with FP8 activations (W4A8) for the MoE layers, and FP8 weights and activations for the dense layers. This optimization reduces the number of bits per parameter from 16 to 4 or 8, significantly reducing GPU memory requirements.

Use with SGLang

This model can be deployed efficiently using the SGLang backend on as few as 4x NVIDIA H200 GPUs, as shown in the example below.

python -m sglang.launch_server --model novita/Deepseek-V3-0324-W4AFP8 --mem-fraction-static 0.85 --disable-shared-experts-fusion --tp-size 4
Safetensors

  • Model size: 349B params
  • Tensor types: F32, BF16, F8_E4M3, I8