See axolotl config
axolotl version: `0.13.0.dev0`

```yaml
# ─── Axolotl config for full fine-tuning / SFT ───
base_model: meta-llama/Llama-3.2-3B-Instruct
tokenizer_type: AutoTokenizer
trust_remote_code: true
is_llama_derived_model: true

# datasets: list of datasets you want to use
datasets:
  - path: nvidia/Llama-Nemotron-Post-Training-Dataset
    type:
      system_prompt: ""
      field_instruction: input
      field_input: ""
      field_output: output
      format: |-
        {instruction}
    split: chat

val_set_size: 0.01  # or another value that makes sense
# (or, if you prefer, define an explicit validation dataset)

# Batch / training / optimization
micro_batch_size: 1
gradient_accumulation_steps: 8
# adjust according to your GPU / memory
sequence_len: 8192  # or whatever maximum context you want
eval_sequence_len: 8192
pad_to_sequence_len: true  # useful if your dataset has varying lengths
sample_packing: true  # useful for efficiency, depending on the dataset

optimizer: adamw_torch_fused  # or another available optimizer
learning_rate: 2.0e-5
weight_decay: 0.0
betas: [0.9, 0.999]
eps: 1.0e-8
lr_scheduler: cosine
warmup_steps: 100

bf16: true  # or fp16, depending on your infrastructure
tf32: true
gradient_checkpointing: true

special_tokens:
  eos_token: "<|eot_id|>"
  pad_token: "<|eot_id|>"
eot_tokens:
  - "<|eot_id|>"
roles_to_train:
  - assistant
train_on_eos: last

# Saving / checkpoints
save_strategy: steps
save_steps: 100
save_total_limit: 3
save_safetensors: true
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
logging_steps: 10

output_dir: ./outputs/llama32_full_sft_instruct_data_nemotron
seed: 42
```
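The `special_tokens` block pins both `eos_token` and `pad_token` to `<|eot_id|>`, Llama 3's end-of-turn marker, since the Llama 3.2 Instruct tokenizer does not define a dedicated padding token by default. A minimal sketch of what that amounts to, assuming only the standard `transformers` `AutoTokenizer` API:

```python
from transformers import AutoTokenizer

# Base tokenizer referenced by the config above.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

# "<|eot_id|>" already exists in the vocabulary, so reusing it adds no new embeddings.
print("eot id:", tok.convert_tokens_to_ids("<|eot_id|>"))
print("default eos:", tok.eos_token, "| default pad:", tok.pad_token)

# Roughly what axolotl applies from the `special_tokens` block:
tok.eos_token = "<|eot_id|>"
tok.pad_token = "<|eot_id|>"
```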
# outputs/llama32_full_sft_instruct_data_nemotron
This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct on the nvidia/Llama-Nemotron-Post-Training-Dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
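No usage guidance is documented yet, so the snippet below is only a hedged sketch of standard `transformers` chat inference. It assumes the checkpoint is published under the hub id `cemig-temp/llama3.2-3b-instruct-data-nemotron`; otherwise, point `model_id` at the local `output_dir` from the config.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cemig-temp/llama3.2-3b-instruct-data-nemotron"  # assumed hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain gradient checkpointing in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```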
## Training and evaluation data
More information needed
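The axolotl config above trains on the `chat` split of `nvidia/Llama-Nemotron-Post-Training-Dataset`, mapping the `input` field to the prompt and `output` to the completion. A quick, hedged way to inspect that data with the `datasets` library (split and field names are taken from the config, not verified against the dataset schema, and the hub dataset may additionally require a config name):

```python
from datasets import load_dataset

# Stream to avoid downloading the full dataset just to peek at a record.
ds = load_dataset(
    "nvidia/Llama-Nemotron-Post-Training-Dataset", split="chat", streaming=True
)

example = next(iter(ds))
# Per the axolotl mapping: field_instruction = "input", field_output = "output".
print(example["input"])
print(example["output"])
```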
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 849
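As a quick sanity check on the figures above (plain arithmetic; the single-device assumption follows from the reported total batch size of 8):

```python
micro_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 1  # assumed: a total_train_batch_size of 8 implies one GPU here

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 8

training_steps = 849
# With sample_packing enabled, each unit is a packed sequence of up to 8192 tokens.
packed_sequences_seen = total_train_batch_size * training_steps  # 6792
print(total_train_batch_size, packed_sequences_seen)
```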
### Training results
### Framework versions
- Transformers 4.57.1
- PyTorch 2.9.0+cu130
- Datasets 4.3.0
- Tokenizers 0.22.1