---
license: apache-2.0
language:
- ar
- zh
- en
---
# AceGPT
AceGPT is a collection of fully fine-tuned generative text models focused on the Arabic language domain.
This is the repository for the version 2 8B chat model, fine-tuned from the pre-trained [AceGPT-v2-8B](https://huggingface.co/FreedomIntelligence/AceGPT-v2-8B).
---
## Model Details
We have released the AceGPT family of large language models, a collection of fully fine-tuned generative text models ranging from 7B to 70B parameters. The family comprises two main categories: AceGPT and AceGPT-chat, where AceGPT-chat is optimized specifically for dialogue applications. Our models have demonstrated superior performance to all currently available open-source Arabic dialogue models on multiple benchmarks, and in our human evaluations they achieve satisfaction levels comparable to some closed-source models, such as ChatGPT, in Arabic.
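A minimal inference sketch with Hugging Face `transformers` is shown below. It assumes the checkpoint id for this repository is `FreedomIntelligence/AceGPT-v2-8B-Chat` and that the tokenizer ships a chat template inherited from the base model; adjust both to your setup.
```python
# A minimal inference sketch with Hugging Face transformers.
# Assumptions: the checkpoint id below matches this repository, and the
# tokenizer ships a chat template inherited from the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-v2-8B-Chat"  # adjust if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# Single-turn chat prompt: "What is the capital of Saudi Arabia?"
messages = [{"role": "user", "content": "ما هي عاصمة المملكة العربية السعودية؟"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```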
## Model Developers
We are from the King Abdullah University of Science and Technology (KAUST), the Chinese University of Hong Kong, Shenzhen (CUHKSZ) and the Shenzhen Research Institute of Big Data (SRIBD).
## Variations
The AceGPT family comes in a range of parameter sizes (7B, 8B, 13B, 32B, and 70B); each size is available as a base model and a -chat model.
## Paper
The paper is available [here](https://huggingface.co/FreedomIntelligence/AceGPT-v2-70B-Chat/blob/main/Alignment_at_Pre_training__a_Case_Study_of_Aligning_LLMs_in_Arabic.pdf).
## Input
Models accept text input only.
## Output
Models generate text output only.
## Model Evaluation Results
Benchmark evaluations are conducted using accuracy or F1 scores as metrics, following the evaluation framework available in the [AceGPT repository](https://github.com/FreedomIntelligence/AceGPT/tree/main).
([**ArabicMMLU**](https://github.com/mbzuai-nlp/ArabicMMLU) is assessed based on its source settings.)
| | [MMLU (Huang et al., 2023)](https://github.com/FreedomIntelligence/AceGPT) | [ArabicMMLU](https://github.com/mbzuai-nlp/ArabicMMLU) | EXAMS | ACVA (clean) | ACVA (all) | Arabic BoolQ | Arabic ARC-C | Average |
|------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| LLaMA-2-7B-chat | 13.78 | 33.40 | 13.05 | 20.99 | 21.80 | 34.92 | 23.72 | 21.09 |
| LLaMA-2-13B-chat | 8.92 | 36.12 | 16.11 | 35.12 | 35.71 | 54.13 | 27.47 | 30.51 |
| Jais-13B-chat | 19.52 | 54.83 | 19.71 | 66.75 | 61.41 | 41.25 | 11.95 | 39.34 |
| Phoenix-7b | 29.72 | 44.74 | 31.93 | 43.80 | 41.86 | 66.70 | 33.53 | 41.75 |
| AceGPT-7B-chat | 30.69 | 36.31 | 33.73 | 53.87 | 53.07 | 60.70 | 38.05 | 43.77 |
| Mistral-7B-Instruct-v0.2 | 27.93 | 41.44 | 21.56 | 64.56 | 63.47 | 60.18 | 35.67 | 44.97 |
| AceGPT-13B-Chat | 35.59 | 52.61 | 38.72 | 70.82 | 70.21 | 66.85 | 44.20 | 54.14 |
| Jais-30B-chat-v3 | 35.68 | 62.36 | 32.24 | 73.63 | 73.66 | 76.30 | 51.02 | 57.84 |
| Jais-30B-chat-v1 | 38.12 | 59.33 | 40.45 | 74.46 | 72.41 | 73.76 | 50.94 | 58.49 |
| AceGPT-v1.5-7B-Chat | 45.77 | 56.62 | 43.69 | 69.46 | 70.86 | 72.45 | 60.49 | 59.90 |
| ChatGPT 3.5 Turbo | 46.07 | 57.72 | 45.63 | 74.45 | 76.88 | 76.12 | 60.24 | 62.44 |
| AceGPT-v1.5-13B-Chat | 47.33 | 61.70 | 48.37 | 76.90 | 76.37 | 69.33 | 63.99 | 63.42 |
| Qwen1.5-32B-Chat | 51.99 | 57.35 | 46.29 | 78.08 | 78.26 | 77.61 | 71.25 | 65.83 |
| **AceGPT-v2-8B-Chat** | 54.45 | 62.21 | 52.98 | 76.54 | 76.55 | 71.65 | 72.44 | 66.69 |
| AceGPT-v2-32B-Chat | 57.12 | 68.70 | 52.89 | 81.36 | 79.03 | 77.22 | 78.07 | 70.63 |
| AceGPT-v2-70B-Chat | 64.26 | **72.50** | 56.99 | 78.61 | 77.38 | 82.66 | 85.53 | 73.99 |
| GPT-4 | **65.04** | **72.50** | **57.76** | **84.06** | **79.43** | **85.99** | **85.67** | **75.78** |
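As a rough illustration of the metric behind the table above, the sketch below extracts the chosen letter from each generation and scores it against the gold label. The `extract_choice` helper is hypothetical; the actual scoring code lives in the evaluation framework linked above.
```python
# Illustrative accuracy scoring for the multiple-choice benchmarks above.
# This is a sketch, not the actual evaluation harness; extract_choice
# is a hypothetical helper for pulling A/B/C/D out of a generation.
import re

def extract_choice(generation: str) -> str | None:
    """Return the first standalone A/B/C/D letter in a model generation."""
    match = re.search(r"\b([ABCD])\b", generation)
    return match.group(1) if match else None

def accuracy(generations: list[str], golds: list[str]) -> float:
    """Fraction of generations whose extracted choice matches the gold label."""
    hits = sum(extract_choice(g) == gold for g, gold in zip(generations, golds))
    return hits / len(golds)

# Example: the model answers the first question correctly, misses the second.
print(accuracy(["B\n\nالشرح:", "A"], ["B", "C"]))  # 0.5
```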
## Samples
#### Sample 1 (abstract_algebra)
* input:
"فيما يلي أسئلة الاختيار من متعدد حول جبر تجريدي\n\nسؤال: ما هو الدرجة للامتداد الميداني الناتج من Q(sqrt(2), sqrt(3), sqrt(18)) على Q؟\nA. 0\nB. 4\nC. 2\nD. 6\nمن فضلك اختر إجابة واحدة من بين 'A، B، C، D' دون شرح."
  (English: "The following are multiple-choice questions about abstract algebra.\n\nQuestion: What is the degree of the field extension Q(sqrt(2), sqrt(3), sqrt(18)) over Q?\nA. 0\nB. 4\nC. 2\nD. 6\nPlease choose one answer from 'A, B, C, D' without explanation.")
* output:
"B\n\nالشرح:\n\nالإجابة هي"
  (English: "B\n\nExplanation:\n\nThe answer is")
#### Sample 2 (business_ethics)
* input:
"فيما يلي أسئلة الاختيار من متعدد حول أخلاقيات الأعمال\n\nسؤال: تُصبح _______ مثل البيتكوين أكثر انتشارًا وتحمل مجموعة كبيرة من الآثار الأخلاقية المرتبطة بها، على سبيل المثال، إنها _______ وأكثر _______. ومع ذلك، تم استخدامها أيضًا للمشاركة في _______.\nA. العملات الرقمية، مكلفة، آمنة، جرائم مالية\nB. العملات التقليدية، رخيصة، غير آمنة، العطاء الخيري\nC. العملات الرقمية، رخيصة، آمنة، جرائم مالية\nD. العملات التقليدية، مكلفة، غير آمنة، العطاء الخيري\nمن فضلك اختر إجابة واحدة من بين 'A، B، C، D' دون شرح."
  (English: "The following are multiple-choice questions about business ethics.\n\nQuestion: _______ such as Bitcoin are becoming more widespread and carry a wide range of associated ethical implications; for example, they are _______ and more _______. However, they have also been used to engage in _______.\nA. Digital currencies, expensive, secure, financial crime\nB. Traditional currencies, cheap, insecure, charitable giving\nC. Digital currencies, cheap, secure, financial crime\nD. Traditional currencies, expensive, insecure, charitable giving\nPlease choose one answer from 'A, B, C, D' without explanation.")
* output:
"C\n\nالشرح:\n\nالإجابة هي"
  (English: "C\n\nExplanation:\n\nThe answer is")
## Reference
```
@inproceedings{liang2024alignment,
title={Alignment at Pre-training! Towards Native Alignment for Arabic {LLM}s},
author={Juhao Liang and Zhenyang Cai and Jianqing Zhu and Huang Huang and Kewei Zong and Bang An and Mosen Alharthi and Juncai He and Lian Zhang and Haizhou Li and Benyou Wang and Jinchao Xu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=woRFmNJiLp}
}
```
```
@article{zhu2024second,
title={Second Language (Arabic) Acquisition of LLMs via Progressive Vocabulary Expansion},
author={Zhu, Jianqing and Huang, Huang and Lin, Zhihang and Liang, Juhao and Tang, Zhengyang and Almubarak, Khalid and Alharthi, Mosen and An, Bang and He, Juncai and Wu, Xiangbo and Yu, Fei and Chen, Junying and Ma, Zhuoheng and Du, Yuhao and Hu, Yan and Zhang, He and Alghamdi, Emad A. and Zhang, Lian and Sun, Ruoyu and Li, Haizhou and Wang, Benyou and Xu, Jinchao},
journal={},
year={2024}
}
```