Hello, I have a model that works well for my assistant. I want to fine-tune this model; however, given that this can get a bit pricey, I don't want to spend the money only to find out that I can't select the fine-tuned model as one of the options for my assistant.
So, is it possible to use a fine-tuned model for an assistant?
Hi,
If you would like to fine-tune a reasonably sized model at lower cost, especially from a notebook on Google Colab, I'd suggest taking a look at Unsloth.ai.
Link to GitHub: GitHub - unslothai/unsloth: Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory