Hugging Face Forums
Retraining a PEFT model after loading
Beginners
John6666 · February 15, 2025, 7:56am · Post #3
> load_in_8bit=True, # Load in 8-bit precision

Maybe it’s because of the quantization. With load_in_8bit=True, the base model’s weights are loaded as frozen 8-bit tensors, so gradients won’t flow through them unless the model is prepared for k-bit training before the adapter is reattached.
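A minimal sketch of that preparation step, assuming a bitsandbytes 8-bit base model and a previously saved LoRA adapter (the model ID and adapter path below are placeholders):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel, prepare_model_for_kbit_training

# Placeholder names: substitute your own base model ID and adapter path.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Casts the non-quantized params (layer norms, lm_head) to fp32 and makes
# the embedding outputs require grads, so backprop can reach the adapter
# through the frozen 8-bit weights.
base = prepare_model_for_kbit_training(base)

# is_trainable=True reloads the adapter with requires_grad=True instead of
# the default inference mode, which is what allows retraining after loading.
model = PeftModel.from_pretrained(base, "path/to/adapter", is_trainable=True)
model.print_trainable_parameters()  # sanity check: adapter params are trainable
```

Without prepare_model_for_kbit_training (or at least model.enable_input_require_grads()), training on a quantized base typically fails with “element 0 of tensors does not require grad and does not have a grad_fn”.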