---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- CohereForAI/aya_dataset
- jondurbin/airoboros-3.2
- m-a-p/COIG-CQIA
- hfl/ruozhiba_gpt4
- hkust-nlp/dart-math-hard
- ise-uiuc/Magicoder-Evol-Instruct-110K
---
# Model Card for FuxiTranyu-8B-SFT
## Model Summary
FuxiTranyu-8B is an **open-source** **multilingual large language model** trained from scratch, with a specific focus on multilinguality. It is trained on 600B tokens with a balanced data distribution across languages and exhibits strong multilingual performance compared to previous multilingual LLMs such as BLOOM-7B and PolyLM-13B.
FuxiTranyu supports 43 natural languages (Arabic, Bengali, Bulgarian, Burmese, Catalan, Chinese, Czech, Dutch, English, Filipino, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Malay, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Tajik, Thai, Turkish, Turkmen, Ukrainian, Urdu, Uzbek, and Vietnamese) and covers 16 programming languages (Java, JavaScript, Python, PHP, C, C++, C#, TypeScript, Go, SQL, Rust, Ruby, Scala, Lua, Assembly, and Visual Basic).
FuxiTranyu-8B-SFT is an instruction-tuned version of the [FuxiTranyu-8B](https://huggingface.co/TJUNLP/FuxiTranyu-8B) base model.
More details on the data collection & processing, pretraining and fine-tuning of FuxiTranyu can be found in the [technical report](https://arxiv.org/abs/2408.06273).
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "TJUNLP/FuxiTranyu-8B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype='auto', trust_remote_code=True)
messages = [{"role": "user", "content": "This is an input text:"}]
# format messages with the ChatML chat template
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
# <|im_start|>user\nThis is an input text:<|im_end|>\n<|im_start|>assistant\n
output_ids = model.generate(input_ids, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
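The same ChatML template works for prompts in any of the supported languages. The snippet below is a minimal sketch that reuses the `model` and `tokenizer` loaded above with a non-English prompt and decodes only the newly generated tokens; the prompt text and generation settings are illustrative, not prescribed by the model card.

```python
# Minimal sketch: non-English prompt with the same ChatML template (illustrative example).
messages = [{"role": "user", "content": "请用中文介绍一下你自己。"}]  # "Please introduce yourself in Chinese."
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Slice off the prompt so only the assistant's reply is decoded.
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```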
## Citation info
```bibtex
@article{FuxiTranyu8B,
title={FuxiTranyu: A Multilingual Large Language Model Trained with Balanced Data},
  author={Haoran Sun and Renren Jin and Shaoyang Xu and Leiyu Pan and Supryadi and Menglong Cui and Jiangcun Du and Yikun Lei and Lei Yang and Ling Shi and Juesi Xiao and Shaolin Zhu and Deyi Xiong},
  journal={arXiv preprint arXiv:2408.06273},
year={2024},
url={https://arxiv.org/abs/2408.06273}
}
```