Uploaded model
- Developed by: diegoakel
- License: apache-2.0
- Finetuned from model : unsloth/Llama-3.2-1B-bnb-4bit
The notebook used to train the model is available here. It is the Llama 3.2 1B base model (the Unsloth version), finetuned to write Python code on the iamtarun/python_code_instructions_18k_alpaca dataset.
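Since the dataset follows the Alpaca instruction format, prompts at inference time should use the same template the model saw during finetuning. Below is a minimal sketch of a prompt builder for that format; the exact template wording in the training notebook may differ slightly, and `build_alpaca_prompt` is an illustrative helper name, not part of any library.

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Build a prompt in the Alpaca format used by the
    python_code_instructions_18k_alpaca dataset (illustrative sketch)."""
    if input_text:
        # Variant with an extra "Input" section for additional context.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Instruction-only variant.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Write a Python function that reverses a string.")
```

The resulting string can be passed directly to the model (e.g. via `transformers` generation), with the model's completion appearing after the `### Response:` marker.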
I wrote about the process on my blog, here.
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Model tree for diegoakel/llama3.2-1B-PythonInstruct
- Base model: meta-llama/Llama-3.2-1B
- Quantized: unsloth/Llama-3.2-1B-bnb-4bit