Built with Axolotl

See axolotl config

axolotl version: 0.13.0.dev0

# ===== Model =====
base_model: meta-llama/Llama-3.1-8B
tokenizer_type: AutoTokenizer
trust_remote_code: true

# Llama 3.1 is a Llama-derived model; this helps Axolotl apply the correct optimizations
is_llama_derived_model: true

# Conversation template
chat_template: chatml

plugins:
  - axolotl.integrations.liger.LigerPlugin

special_tokens:
  pad_token: "<|eot_id|>"

# ===== Dataset (Nemotron Post-Training SFT) =====
datasets:
  - path: nvidia/Llama-Nemotron-Post-Training-Dataset
    name: SFT           # HF subset
    split: chat         # you can duplicate this block for math_v1.1, science, etc. (see the commented-out example below)
    type: chat_template
    field_messages: input            # column containing the list of {role, content}
    # If the fields are already named "role" and "content", the mapping below is not needed.
    message_property_mappings:
      role: role
      content: content
    # The "output" column is the response; Axolotl converts input+output into an internal conversation.
    field_output: output
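  # Example (commented out) of duplicating this block for another split,
  # using the split names mentioned in the comment above; uncomment and
  # adjust to train on additional subsets:
  # - path: nvidia/Llama-Nemotron-Post-Training-Dataset
  #   name: SFT
  #   split: math_v1.1
  #   type: chat_template
  #   field_messages: input
  #   field_output: output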

# Do not train on user/system tokens
train_on_inputs: false

# ===== Context length =====
sequence_len: 8192
eval_sequence_len: 8192
pad_to_sequence_len: true
sample_packing: true
sample_packing_group_size: 100000
sample_packing_bin_size: 200
group_by_length: true
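# Note: sample_packing concatenates several shorter examples into each
# 8192-token sequence to reduce padding waste; sample_packing_group_size and
# sample_packing_bin_size tune how the packing bins are built, and
# group_by_length batches examples of similar length together.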

# ===== Batch / epochs – hyperparameters from the paper =====
micro_batch_size: 1               # per-device batch size
gradient_accumulation_steps: 8    # 4 GPUs -> effective batch = 32
num_epochs: 2
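# Effective batch size arithmetic (assuming the 4 data-parallel GPUs noted above):
#   effective_batch = micro_batch_size * gradient_accumulation_steps * num_gpus
#                   = 1 * 8 * 4 = 32 sequences per optimizer step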

# (optional) to make it explicit that you have 4 GPUs for DP
# dp_shard_size: 4

# ===== Optimizer / LR =====
learning_rate: 2.0e-5
optimizer: adamw_torch_fused
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1.0e-8

lr_scheduler: cosine
warmup_steps: 100
weight_decay: 0.0   # not specified in the paper, so left at 0.0 (default)

# ===== Precision / memory =====
bf16: true          # or "auto" if preferred
tf32: true
gradient_checkpointing: true
activation_offloading: false

# ===== Eval / logging / checkpoints =====
val_set_size: 0.01          # 1% of the dataset for validation (adjust as needed)
eval_strategy: steps
eval_steps: 100

save_strategy: steps
save_steps: 100
save_total_limit: 3
save_only_model: false
save_safetensors: true
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false

logging_steps: 10

# ===== Output / reproducibility / tracking =====
output_dir: ./outputs/llama31_8b_nemotron_full_sft
seed: 42

use_wandb: true
wandb_project: "llama31_nemotron_sft"
wandb_name: "llama31-8b-full-sft-chatml"
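
For reference, a minimal sketch (outside the Axolotl config, assuming the `datasets` library and the subset/split names used above) to inspect the columns that `field_messages` and `field_output` map:

```python
from datasets import load_dataset

# Subset ("SFT") and split ("chat") names are taken from the config above.
ds = load_dataset("nvidia/Llama-Nemotron-Post-Training-Dataset", name="SFT", split="chat")

example = ds[0]
print(example["input"])   # list of {role, content} messages (field_messages)
print(example["output"])  # assistant response used as the training target (field_output)
```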

outputs/llama31_8b_nemotron_full_sft

This model is a fine-tuned version of meta-llama/Llama-3.1-8B, trained on the SFT chat split of nvidia/Llama-Nemotron-Post-Training-Dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6628
  • Memory/max active (GiB): 61.65
  • Memory/max allocated (GiB): 61.65
  • Memory/device reserved (GiB): 88.96

Model description

More information needed

Intended uses & limitations

More information needed
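
As a starting point, a minimal inference sketch, assuming the checkpoint written to the `output_dir` from the config above (or the corresponding Hub repo) and that the saved tokenizer carries the chatml template configured during training:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: loading from the local output_dir in the config above;
# substitute the Hub repo id if loading from the Hugging Face Hub.
model_path = "./outputs/llama31_8b_nemotron_full_sft"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="bfloat16",
    device_map="auto",  # requires the accelerate package
)

# The config sets chat_template: chatml, so the saved tokenizer should
# format messages with that template.
messages = [{"role": "user", "content": "Explain gradient checkpointing in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```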

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 8
  • optimizer: AdamW (adamw_torch_fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 570
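
For reference, an equivalent optimizer and scheduler setup in plain PyTorch/Transformers, written as a sketch using the hyperparameters listed above (the placeholder model is hypothetical):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder module standing in for the fine-tuned model's parameters.
model = torch.nn.Linear(16, 16).to(device)

# AdamW (torch fused) with the betas/epsilon listed above; fused kernels
# are only available on CUDA.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
    fused=(device == "cuda"),
)

# Cosine decay with 100 warmup steps over the 570 reported training steps.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=570
)
```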

Training results

| Training Loss | Epoch  | Step | Validation Loss | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|---------------|--------|------|-----------------|--------------|-----------------|----------------|
| No log        | 0      | 0    | 3.7258          | 27.81        | 27.81           | 28.15          |
| 1.2797        | 0.3498 | 100  | 1.2345          | 61.65        | 61.65           | 87.65          |
| 0.9685        | 0.6996 | 200  | 0.9419          | 61.65        | 61.65           | 88.96          |
| 0.5627        | 1.0490 | 300  | 0.7959          | 61.65        | 61.65           | 88.27          |
| 0.4859        | 1.3988 | 400  | 0.6849          | 61.65        | 61.65           | 88.96          |
| 0.4636        | 1.7486 | 500  | 0.6628          | 61.65        | 61.65           | 88.96          |

Framework versions

  • Transformers 4.57.1
  • Pytorch 2.9.0+cu130
  • Datasets 4.3.0
  • Tokenizers 0.22.1