|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
--- |
|
# InsTagger |
|
|
|
**InsTagger** is a tool that automatically provides instruction tags by distilling tagging results from **InsTag**.
|
|
|
InsTag aims at analyzing supervised fine-tuning (SFT) data used to align LLMs with human preference. For local tagging deployment, we release InsTagger, fine-tuned on InsTag results, to tag the queries in SFT data. Through the lens of tags, we sample a 6K subset of open-source SFT data to fine-tune LLaMA and LLaMA-2; the resulting models, TagLM-13B-v1.0 and TagLM-13B-v2.0, outperform many open-source LLMs on MT-Bench.
|
|
|
|
|
### Model Description |
|
|
|
- **Model type:** Auto-regressive language model
|
- **Language(s) (NLP):** English |
|
- **License:** apache-2.0 |
|
- **Finetuned from model:** LLaMA-2
|
|
|
### Model Sources
|
|
|
- **Repository:** [https://github.com/OFA-Sys/InsTag](https://github.com/OFA-Sys/InsTag) |
|
- **Paper:** [Arxiv](https://arxiv.org/pdf/2308.07074.pdf) |
|
- **Demo:** [ModelScope Demo](https://www.modelscope.cn/studios/lukeminglkm/instagger_demo/summary) |
|
|
|
## Uses |
|
|
|
This model is developed directly with [FastChat](https://github.com/lm-sys/FastChat), so it can be easily used for inference or served with FastChat by selecting the vicuna conversation template.
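As a minimal sketch, assuming the weights are available under a Hub id such as `OFA-Sys/InsTagger` (adjust `--model-path` to your local checkpoint directory if needed), interactive inference with the FastChat CLI might look like:

```shell
# Install FastChat, the framework this model was developed with
pip3 install "fschat[model_worker]"

# Chat with the model using the vicuna conversation template.
# OFA-Sys/InsTagger is an assumed model path; point it at your
# local weights directory if the Hub id differs.
python3 -m fastchat.serve.cli \
    --model-path OFA-Sys/InsTagger \
    --conv-template vicuna_v1.1
```

Paste an SFT query at the prompt and the model responds with its predicted instruction tags. The same `--model-path` and `--conv-template` flags apply to FastChat's OpenAI-compatible API server (`fastchat.serve.openai_api_server`) for batch tagging.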