---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
- yahma/alpaca-cleaned
---

# OpenLLaMA: An Open Reproduction of LLaMA with Alpaca LoRA Instruction Fine-Tuning

- A permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model.
- A permissively licensed open source instruction fine-tune of that model using [Alpaca LoRA](https://github.com/tloen/alpaca-lora).

## Dataset and Training

- Base model: [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
- Instruction fine-tuned using [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned)
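
Models instruction-tuned on Alpaca-style data are usually queried with the Alpaca prompt template rather than raw text. A minimal sketch of building such a prompt is below; the exact wording follows the upstream [alpaca-lora](https://github.com/tloen/alpaca-lora) repository's template and is an assumption here, so verify it against this model's training code before relying on it:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request using the Alpaca prompt template.

    Note: wording assumed from the upstream Alpaca/alpaca-lora
    repositories; confirm against this model's training setup.
    """
    if input_text:
        # Variant used when the instruction has accompanying context.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Variant used for a bare instruction with no extra context.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example: generation should be fed this string, and the model's
# completion read from after the "### Response:" marker.
prompt = build_alpaca_prompt("List three primary colors.")
```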