Introduction
We introduce Xmodel-LM1.5, a 1-billion-parameter multilingual large language model pretrained on 2 trillion tokens and designed for balanced performance and scalability. Unlike most large models, which rely on BPE tokenizers, Xmodel-LM1.5 employs a custom unigram tokenizer with a 65,280-token vocabulary, improving both tokenization efficiency and accuracy. The model delivers competitive results across multiple languages, including Thai, Arabic, French, Chinese, and English, outperforming Alibaba's PolyLM-1.7B on the corresponding evaluation datasets. Xmodel-LM1.5 performs strongly on benchmarks such as mMMLU and PIQA, and achieves state-of-the-art results in Thai. To support research on low-resource languages, we release **Xdata_Thai**, a Thai-specific evaluation dataset covering unique linguistic challenges such as gendered particles and idioms. While the model demonstrates strong performance, there is still room for improvement in handling culturally specific nuances. We hope this work contributes to advances in multilingual AI research. Refer to our paper and GitHub for more details!
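As a quick-start sketch, the pretrained model can be loaded and queried through Hugging Face Transformers. The repository id `XiaoduoAILab/XmodelLM1.5` and the need for `trust_remote_code=True` (for the custom unigram tokenizer) are assumptions here; check our GitHub for the exact identifiers.

```python
# Minimal usage sketch; the repo id below is an assumption, see our GitHub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XiaoduoAILab/XmodelLM1.5"  # hypothetical Hugging Face repo id

# trust_remote_code is assumed to be required for the custom unigram tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Xmodel-LM1.5 is a multilingual language model that"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```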
Instruct Models
We offer both the pretrained base model, Xmodel-LM1.5, and an instruction-tuned variant, which has been fine-tuned exclusively on Chinese and English data. A usage sketch for the instruct variant follows below.
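Below is a minimal chat sketch for the instruction-tuned variant. The repo id `XiaoduoAILab/XmodelLM1.5-Instruct` and the availability of a chat template are assumptions; adapt the names to the actual release on our GitHub.

```python
# Minimal chat sketch; repo id and chat-template support are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XiaoduoAILab/XmodelLM1.5-Instruct"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# The instruct variant is tuned on Chinese and English, so prompt in either.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```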