burtenshaw 
posted an update Aug 12
SFT + Quantisation + Unsloth is a super easy way of squeezing extra performance out of an LLM at low latencies. Here are some handy resources to bootstrap your projects.

Here's a filtered dataset from Helpsteer2 with the most correct and coherent samples: burtenshaw/helpsteer-2-plus
This is an SFT finetuned model: https://huggingface.co/burtenshaw/gemma-help-tiny-sft
This is the notebook I use to train the model: https://colab.research.google.com/drive/17oskw_5lil5C3jCW34rA-EXjXnGgRRZw?usp=sharing
Here's a collection of Unsloth notebooks on finetuning and inference: https://docs.unsloth.ai/get-started/unsloth-notebooks
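To give a feel for what the quantisation half of this recipe does, here is a toy sketch of symmetric 8-bit quantisation in plain Python. It is a simplified illustration of the general idea (one scale per tensor, weights rounded to int8), not Unsloth's actual implementation, and the example weights are made up.

```python
# Toy symmetric int8 quantisation: scale floats into [-127, 127],
# round to integers, and dequantise back. Storage drops from 32-bit
# floats to 8-bit ints at the cost of small rounding error.

def quantise_int8(values):
    scale = max(abs(v) for v in values) / 127  # one scale for the whole tensor
    q = [round(v / scale) for v in values]     # integers in [-127, 127]
    return q, scale

def dequantise(q, scale):
    return [x * scale for x in q]

# Hypothetical weight values, just for illustration
weights = [0.12, -0.5, 0.33, 1.27]
q, scale = quantise_int8(weights)
approx = dequantise(q, scale)
# each entry of approx is within half a quantisation step of the original
```

Real quantisation schemes refine this with per-channel scales, zero-points for asymmetric ranges, and 4-bit variants, but the round-trip above is the core mechanic.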