|
--- |
|
language: |
|
- en |
|
license: llama3 |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- llama |
|
- gguf |
|
base_model: unsloth/llama-3-8b-bnb-4bit |
|
datasets: |
|
- 922-Narra/tagaloguanaco_cleaned_03152024 |
|
--- |
|
# Llama-3-8b-tagalog-v1
|
* Test model fine-tuned on [this dataset](https://huggingface.co/datasets/922-Narra/tagaloguanaco_cleaned_03152024) |
|
* Base: LLaMA-3 8b |
|
* [GGUFs](https://huggingface.co/922-Narra/Llama-3-8b-tagalog-v1-gguf) |
|
|
|
### USAGE |
|
This is intended primarily as a chat model.
|
|
|
Use "Human" and "Assistant" and prompt with Tagalog: |
|
|
|
"\nHuman: INPUT\nAssistant:" |
|
|
|
### HYPERPARAMS |
|
* Trained for 1 epoch
|
* rank: 32 |
|
* lora alpha: 32 |
|
* learning rate: 2e-4

* batch size: 2

* gradient accumulation steps: 4
|
|
|
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
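
As a rough reference, the sketch below shows how these hyperparameters map onto an Unsloth + TRL fine-tuning run; the sequence length, LoRA target modules, and dataset text field are assumptions not stated on this card:

```python
# Rough training sketch with Unsloth + TRL (older SFTTrainer API), mirroring the
# hyperparameters listed above. Values marked "assumed" are not from the card.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",   # base model from this card
    max_seq_length=2048,             # assumed
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=32,                            # rank: 32
    lora_alpha=32,                   # lora alpha: 32
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)

dataset = load_dataset("922-Narra/tagaloguanaco_cleaned_03152024", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",       # assumed column name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # batch size: 2
        gradient_accumulation_steps=4,   # gradient accumulation steps: 4
        learning_rate=2e-4,              # learning rate: 2e-4
        num_train_epochs=1,              # 1 epoch
        output_dir="outputs",
    ),
)
trainer.train()
```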
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|
|
### WARNINGS AND DISCLAIMERS |
|
Note that the model may switch back to English (while still understanding Tagalog inputs) or produce clunky outputs.
|
|
|
Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it meant for production use. Use at your own risk!