Open-source package for No-code LLM Fine-Tuning and Data Sanitization

Hey everyone,

I just published a pre-release of Upasak (GitHub - shrut2702/upasak: UI-based Fine-Tuning for Large Language Models (LLMs)), a Python package for UI-based LLM fine-tuning and continued pretraining. It lets you select an LLM (currently Gemma-3), upload your own dataset or pick one from the Hugging Face Hub, sanitize your data to remove PII, customize hyperparameters, enable LoRA, train your model, and monitor the experiment, with an option to push your fine-tuned model to the Hugging Face Hub.

For a more detailed tutorial: https://youtu.be/vccPQimdXUc?si=WeMjLwro_ItoKmLm

Would love for you to try it and share honest feedback!
Thanks!