|
---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- CohereForAI/aya_dataset
- jondurbin/airoboros-3.2
- m-a-p/COIG-CQIA
- hfl/ruozhiba_gpt4
- hkust-nlp/dart-math-hard
- ise-uiuc/Magicoder-Evol-Instruct-110K
---
|
# Model Card for FuxiTranyu-8B-SFT |
|
|
|
## Model Summary |
|
|
|
FuxiTranyu-8B is an **open-source multilingual large language model** trained from scratch with a specific focus on multilinguality. It was trained on 600B tokens with a balanced data distribution across languages and exhibits strong multilingual performance compared to previous multilingual LLMs such as BLOOM-7B and PolyLM-13B.
|
|
|
FuxiTranyu supports 43 natural languages (Arabic, Bengali, Bulgarian, Burmese, Catalan, Chinese, Czech, Dutch, English, Filipino, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Malay, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Tajik, Thai, Turkish, Turkmen, Ukrainian, Urdu, Uzbek, and Vietnamese) and covers 16 programming languages (Java, JavaScript, Python, PHP, C, C++, C#, TypeScript, Go, SQL, Rust, Ruby, Scala, Lua, Assembly, and Visual Basic).
|
|
|
FuxiTranyu-8B-SFT is an instruction fine-tuned version of the [FuxiTranyu-8B](https://huggingface.co/TJUNLP/FuxiTranyu-8B) base model.
|
|
|
More details on data collection and processing, pretraining, and fine-tuning of FuxiTranyu can be found in the technical report.
|
|
|
## Usage |
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SFT model; trust_remote_code is required for the custom architecture.
model_path = "TJUNLP/FuxiTranyu-8B-SFT"

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True)

messages = [{"role": "user", "content": "This is an input text:"}]
# Format messages with the ChatML chat template, which produces:
# <|im_start|>user\nThis is an input text:<|im_end|>\n<|im_start|>assistant\n
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=20)
# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

print(response)
```
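The same chat template handles multi-turn conversations. Below is a minimal sketch that reuses `model` and `tokenizer` from the snippet above; the prompts and sampling parameters (`do_sample`, `temperature`, `top_p`) are illustrative choices, not recommended defaults from the authors:

```python
# Multi-turn sketch: reuses `model` and `tokenizer` loaded above.
messages = [{"role": "user", "content": "Translate to French: The weather is nice today."}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9)
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Append the assistant's reply and ask a follow-up question in the same conversation.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Now translate the same sentence to German."})
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Decoding only the tokens after `input_ids.shape[-1]` keeps the echoed prompt out of each reply, which makes it safe to append the reply back into `messages` for the next turn.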
|
|
|
## Citation info |
|
|
|
```bibtex
@misc{FuxiTranyu8B,
  title={FuxiTranyu: A Multilingual Large Language Model Trained with Balanced Data},
  author={Haoran Sun and Renren Jin and Shaoyang Xu and Leiyu Pan and Supryadi and Menglong Cui and Jiangcun Du and Yikun Lei and Lei Yang and Ling Shi and Juesi Xiao and Shaolin Zhu and Deyi Xiong},
  year={2024},
  eprint={2408},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```