
Model Card for FuxiTranyu-8B

Model Summary

FuxiTranyu-8B is an open-source multilingual large language model trained from scratch, with a specific focus on multilinguality. It is trained on 600B tokens with a balanced data distribution across languages and exhibits strong multilingual performance compared to previous multilingual LLMs such as BLOOM-7B and PolyLM-13B.

FuxiTranyu supports 43 natural languages (Arabic, Bengali, Bulgarian, Burmese, Catalan, Chinese, Czech, Dutch, English, Filipino, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Malay, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Tajik, Thai, Turkish, Turkmen, Ukrainian, Urdu, Uzbek, and Vietnamese) and covers 16 programming languages (Java, JavaScript, Python, PHP, C, C++, C#, TypeScript, Go, SQL, Rust, Ruby, Scala, Lua, Assembly, and Visual Basic).

More details on the data collection and processing, pretraining, and fine-tuning of FuxiTranyu can be found in the technical report.

In addition to the base model and its intermediate checkpoints, we also release two instruction-tuned variants: an SFT version and a DPO version (see the loading sketch below).
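The instruction-tuned variants load the same way as the base model. A minimal sketch, assuming the SFT repository follows the base model's naming (TJUNLP/FuxiTranyu-8B-SFT is an assumed repo id) and ships a chat template; check the variant's own model card for the exact repo id and prompt format:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; verify against the SFT model card.
sft_path = "TJUNLP/FuxiTranyu-8B-SFT"
tokenizer = AutoTokenizer.from_pretrained(sft_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(sft_path, device_map="auto", torch_dtype='auto', trust_remote_code=True)

# If the tokenizer defines a chat template, format the conversation with it.
messages = [{"role": "user", "content": "Translate to German: Good morning!"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=50)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))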

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "TJUNLP/FuxiTranyu-8B"

# Load the tokenizer and model. trust_remote_code is required because
# the repository ships custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype='auto', trust_remote_code=True)

input_text = "This is an input text:"

# Tokenize the prompt and move the tensors to the model's device.
input_ids = tokenizer(input_text, return_tensors='pt').to(model.device)
output_ids = model.generate(**input_ids, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
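Since the base model is a plain (non-chat) LM, prompts in any of the supported languages are completed the same way. A minimal sketch, reusing the tokenizer and model loaded above; the sampling parameters here are illustrative choices, not recommendations from the technical report:

# Example: completing a French prompt with sampling enabled.
prompt = "La capitale de la France est"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,  # illustrative values; tune for your use case
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))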

To load an intermediate checkpoint, please specify the revision. For example:

model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype='auto', revision='10B', trust_remote_code=True)
# This loads the checkpoint trained on 10B tokens.
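The revision names correspond to branches of the model repository. A minimal sketch of how to discover which intermediate checkpoints are available, using the huggingface_hub library; the branch naming (e.g. "10B") is taken from the example above:

from huggingface_hub import list_repo_refs

# Each intermediate checkpoint is published as a branch of the repo;
# the branch name (e.g. "10B") is what you pass as `revision`.
refs = list_repo_refs("TJUNLP/FuxiTranyu-8B")
for branch in refs.branches:
    print(branch.name)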

Citation info

@article{FuxiTranyu8B,
      title={FuxiTranyu: A Multilingual Large Language Model Trained with Balanced Data}, 
      author={Haoran Sun and Renren Jin and Shaoyang Xu and Leiyu Pan and Supryadi and Menglong Cui and Jiangcun Du and Yikun Lei and Lei Yang and Ling Shi and Juesi Xiao and Shaolin Zhu and Deyi Xiong},
      journal={arXiv preprint arXiv:2408.06273},
      year={2024},
      url={https://arxiv.org/abs/2408.06273}
}