
AceGPT

AceGPT is a collection of fully fine-tuned generative text models based on Llama 2, specialized for the Arabic language. This is the repository for the AceGPT-13B-chat model.

Model Developers

We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ), the Shenzhen Research Institute of Big Data (SRIBD), and the King Abdullah University of Science and Technology (KAUST).

Variations

AceGPT comes in two parameter sizes, 7B and 13B, and each size is available as a base model and a -chat model.

Input

Models input text only.

Output

Models output text only.
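
As a quick illustration of the text-in, text-out interface, here is a minimal inference sketch using the Hugging Face transformers library. The hub ID FreedomIntelligence/AceGPT-13b-chat and the generation settings are assumptions; check the AceGPT GitHub repository for the exact checkpoint names and prompt format (the GGUF files in this repository are covered below).

```python
# Minimal inference sketch, assuming the full-precision chat checkpoint is
# published under the hub ID below (an assumption, not confirmed by this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-13b-chat"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "ما هي عاصمة المملكة العربية السعودية؟"  # "What is the capital of Saudi Arabia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```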

Model Evaluation Results

Results on Arabic Vicuna-80 and Arabic AlpacaEval. Numbers are the average performance ratio relative to ChatGPT over three runs. We do not report results for the raw Llama 2 models since they cannot properly generate Arabic text.

| Model | Arabic Vicuna-80 | Arabic AlpacaEval |
| --- | --- | --- |
| Phoenix (Chen et al., 2023a) | 71.92% ± 0.2% | 65.62% ± 0.3% |
| Phoenix-multiple-langs (Chen et al., 2023b) | 71.67% ± 0.7% | 65.36% ± 0.1% |
| Jais-13B-chat (Sengupta et al., 2023) | 75.40% ± 1.6% | 74.95% ± 0.2% |
| AceGPT-7B-chat | 94.82% ± 0.2% | 93.81% ± 0.1% |
| AceGPT-13B-chat | 100.88% ± 0.4% | 97.95% ± 0.1% |
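
For context on how these numbers can be read, the sketch below shows one way a performance ratio relative to ChatGPT could be computed from per-question judge scores and averaged over runs. The judging setup, variable names, and all values are illustrative assumptions, not the actual AceGPT evaluation code or data.

```python
# Illustrative only: how a "performance ratio vs. ChatGPT" can be computed.
# The scoring protocol and the numbers below are assumptions for demonstration.
from statistics import mean

def performance_ratio(model_scores, chatgpt_scores):
    """Ratio of the model's total judge score to ChatGPT's, in percent."""
    return 100.0 * sum(model_scores) / sum(chatgpt_scores)

# One (model, ChatGPT) pair of per-question judge scores per run;
# the reported numbers average three such runs.
runs = [
    ([8.1, 7.5, 9.0], [8.0, 7.9, 8.8]),  # hypothetical judge scores
    ([8.3, 7.7, 8.9], [8.1, 7.8, 8.7]),
    ([8.0, 7.6, 9.1], [8.2, 7.7, 8.9]),
]
avg = mean(performance_ratio(m, c) for m, c in runs)
print(f"average performance ratio: {avg:.2f}%")
```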

You can find more details at https://github.com/FreedomIntelligence/AceGPT/tree/main

GGUF

This repository provides GGUF quantizations of the 13B-parameter, llama-architecture model in 4-bit and 5-bit precision.