# Load 4bit models 4x faster

Native bitsandbytes 4bit pre-quantized models
We have a free Google Colab Tesla T4 notebook for Mistral Nemo 12b here: https://colab.research.google.com/drive/17d3U-CAIwzmbDRqbZ9NnpHxCkmXB6LZ0?usp=sharing

All notebooks are beginner-friendly! Add your dataset, click "Run All", and you'll get a finetuned model (trained 2x faster) which can be exported to GGUF, served with vLLM, or uploaded to Hugging Face.
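The "4x faster" headline tracks weight size: 4-bit weights take a quarter of the bytes of 16-bit weights, so downloads and load times scale roughly in proportion (ignoring a small amount of quantization metadata). A back-of-the-envelope sketch:

```python
# Rough illustration of why 4-bit quantized weights are ~4x smaller
# (and thus ~4x faster to download/load) than 16-bit weights.
# The 7e9 parameter count below is illustrative (e.g. a 7B model).
def weight_size_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate size of the model weights in GiB."""
    return n_params * bits_per_param / 8 / 2**30

params_7b = 7e9
fp16_gib = weight_size_gib(params_7b, 16)  # ~13.0 GiB
nf4_gib = weight_size_gib(params_7b, 4)    # ~3.3 GiB

print(f"fp16: {fp16_gib:.1f} GiB, 4-bit: {nf4_gib:.1f} GiB, "
      f"ratio: {fp16_gib / nf4_gib:.0f}x")
```

In practice real 4-bit checkpoints are slightly larger than this because quantization constants and any layers kept in higher precision add overhead, but the 4x ratio is the dominant effect.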
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3 8b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▶️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b (A100) | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b (1xT4) | ▶️ Start on Kaggle | 5x faster* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |