Built with Axolotl

See axolotl config

axolotl version: 0.12.2

base_model: /lustre/fswork/projects/rech/qwv/udv55np/Gemma/base/gemma-3-4b

datasets:
- path: /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking
  ds_type: json
  type: chat_template
  field_messages: conversations
  data_files:
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0007.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0009.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0005.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0006.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0014.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0010.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0012.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0008.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0001.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0002.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0013.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0015.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0004.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0011.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0000.jsonl
  - /lustre/fswork/projects/rech/qwv/udv55np/dataset/ift/Nemotron-Super-49B-v1_5/thinking/0003.jsonl
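  # The files above are JSON Lines. With type: chat_template and
  # field_messages: conversations, each record is expected to carry a list of
  # role/content turns, roughly like this (hypothetical example, not a real record):
  #   {"conversations": [{"role": "user", "content": "..."},
  #                      {"role": "assistant", "content": "..."}]}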

shuffle_merged_datasets: false
shuffle_before_merging_datasets: true
dataset_prepared_path: /lustre/fswork/projects/rech/dgo/udv55np/dataset_gemma/Nemotron-Super-49B-v1_5/split_1
tokenizer_config: "/lustre/fswork/projects/rech/qwv/udv55np/Gemma/base/gemma-3-27b"
chat_template: gemma3
eot_tokens:
  - "<end_of_turn>"

output_dir: /lustre/fswork/projects/rech/dgo/udv55np/ift/Nemotron-Super-49B-v1_5/gemma-3-4b/1

sequence_len: 16384
sample_packing: true
sample_packing_sequentially: true
curriculum_sampling: true
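# Note (not a config key): sample_packing concatenates several short examples
# into each 16384-token sequence; with sequential packing and
# curriculum_sampling, examples are drawn in dataset order rather than
# shuffled, consistent with shuffle_merged_datasets: false above.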

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 0.6
auto_resume_from_checkpoints: true

optimizer: adamw_torch_fused
lr_scheduler: warmup_stable_decay
learning_rate: 5e-6
lr_scheduler_kwargs:
  num_decay_steps: 200
  min_lr_ratio: 0.1
warmup_steps: 100

bf16: true
tf32: false

gradient_checkpointing: true
logging_steps: 10
flash_attention: true

evals_per_epoch: 0
saves_per_epoch: 1
save_total_limit: 20
save_only_model: true

dataset_processes: 32
dataloader_num_workers: 2

use_tensorboard: true
deepspeed: /lustre/fswork/projects/rech/qwv/udv55np/axolotl/zero3.json
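For intuition, here is a minimal sketch of the learning-rate multiplier implied by the scheduler settings above: linear warmup for 100 steps, a constant phase at the peak rate of 5e-6, then a decay over the final 200 steps down to min_lr_ratio of the peak. This assumes linear decay; the exact decay curve used by the warmup_stable_decay implementation may differ.

```python
def wsd_lr_multiplier(step: int, total_steps: int = 3723,
                      warmup_steps: int = 100, decay_steps: int = 200,
                      min_lr_ratio: float = 0.1) -> float:
    """Multiplier on the peak learning rate for a warmup/stable/decay schedule.

    A sketch matching the config above (total_steps taken from the
    hyperparameters reported below); not Axolotl's implementation.
    """
    if step < warmup_steps:                   # linear warmup: 0 -> 1
        return step / warmup_steps
    stable_end = total_steps - decay_steps
    if step < stable_end:                     # stable phase at the peak rate
        return 1.0
    # linear decay over the last decay_steps steps down to min_lr_ratio
    frac = (step - stable_end) / decay_steps  # 0 -> 1 across the decay
    return 1.0 - (1.0 - min_lr_ratio) * min(frac, 1.0)

# With a peak of 5e-6: step 50 -> 2.5e-6, step 2000 -> 5e-6, step 3723 -> 5e-7
```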

/lustre/fswork/projects/rech/dgo/udv55np/ift/Nemotron-Super-49B-v1_5/gemma-3-4b/1

This model is a fine-tuned version of gemma-3-4b, trained on the Nemotron-Super-49B-v1_5 thinking dataset described in the Axolotl config above.

Model description

More information needed

Intended uses & limitations

More information needed
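As a rough starting point, the sketch below shows one way to run inference on the checkpoint with Transformers. The local path and prompt are placeholders, and it assumes the checkpoint loads as a standard causal LM with the Gemma 3 chat template baked into the tokenizer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/path/to/checkpoint"  # placeholder for the output_dir above

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-formatted prompt using the tokenizer's chat template
messages = [{"role": "user", "content": "Explain sample packing in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```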

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 16
  • total_train_batch_size: 16
  • total_eval_batch_size: 16
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: warmup_stable_decay
  • lr_scheduler_warmup_steps: 100
  • training_steps: 3723
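The effective batch size follows directly from the settings above: micro_batch_size 1 × gradient_accumulation_steps 1 × 16 devices = 16 sequences per optimizer step, each a packed sequence of up to 16384 tokens.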

Training results

No evaluation metrics were logged during training (evals_per_epoch: 0), so no results are reported.

Framework versions

  • Transformers 4.55.2
  • Pytorch 2.6.0+cu124
  • Datasets 4.0.0
  • Tokenizers 0.21.1