---
license: apache-2.0
language:
  - en
  - zh
pipeline_tag: text-generation
tags:
  - TransNormerLLM
---

# TransNormerLLM3 -- A Faster and Better LLM

## Introduction

This is the official repository for the TransNormerLLM3 model. We release open-source weights for every 50 billion tokens processed during pre-training.

TransNormerLLM evolved from TransNormer and is the first LLM built on a linear transformer architecture. It is also the first non-Transformer LLM to surpass both conventional Transformers and other efficient models (such as RetNet and Mamba) in speed and performance.

## TransNormerLLM3

  • TransNormerLLM3-15B has 14.83 billion parameters. It is structured with 42 layers, 40 attention heads, and an embedding size of 5120.
  • The Tiktoken tokenizer is used, with a total vocabulary size of about 100,000.
  • It uses Simple GLU as the channel mixer, GLA (gated linear attention) as the token mixer, and SRMSNorm for normalization; a minimal sketch of decayed linear attention follows this list.
  • For position encoding, the first layer employs LRPE with exponential decay, whereas the subsequent layers use exponential decay only.
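
As referenced in the list above, the token mixer combines linear attention with gating and exponential decay. The snippet below is a minimal, illustrative sketch of decayed linear attention in its recurrent form; it is not the released GLA implementation, and the shapes, scalar decay, and absence of gating are simplifying assumptions.

```python
import torch

def decayed_linear_attention(q, k, v, decay):
    """Recurrent linear attention with exponential decay (illustrative only).

    q, k: (seq_len, d_k); v: (seq_len, d_v); decay: scalar in (0, 1).
    The released GLA token mixer is more elaborate (per-head gating,
    normalization, chunk-parallel computation), so treat this as a sketch.
    """
    d_k, d_v = q.shape[1], v.shape[1]
    state = torch.zeros(d_k, d_v)       # running key-value summary S_t
    outputs = []
    for t in range(q.shape[0]):
        # S_t = decay * S_{t-1} + k_t^T v_t : older tokens fade exponentially
        state = decay * state + torch.outer(k[t], v[t])
        # o_t = q_t S_t : per-step output without forming the full T x T attention matrix
        outputs.append(q[t] @ state)
    return torch.stack(outputs)

# Toy usage
q, k, v = torch.randn(8, 16), torch.randn(8, 16), torch.randn(8, 32)
print(decayed_linear_attention(q, k, v, decay=0.95).shape)  # torch.Size([8, 32])
```

Because the state is a fixed-size d_k x d_v matrix, compute and memory stay linear in sequence length, which is the property that lets a linear transformer compete with standard attention on speed.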

## Pre-training Logbook

### Released Weights

| param | token | Hugging Face | Model Scope | Wisemodel |
| :---: | :---: | :----------: | :---------: | :-------: |
| 15B   | 50B   |      🤗      |      🤖      |     🐯    |
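
Since the architecture is not built into Hugging Face transformers, loading the released checkpoints most likely requires trusting remote code. The snippet below is a minimal sketch; the repository id is a placeholder (substitute the actual path linked in the table above), and the dtype and device choices are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hugging Face path from the table above.
repo_id = "OpenNLPLab/TransNormerLLM3-15B"

# trust_remote_code=True lets transformers load the custom modeling code shipped with the checkpoint.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,   # ~15B parameters; needs a GPU with sufficient memory
    trust_remote_code=True,
).eval()

inputs = tokenizer("TransNormerLLM3 is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```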

### Benchmark Results

The evaluations of all models are conducted using the official settings and the lm-evaluation-harness framework.

| Model               | P  | T    | BoolQ | PIQA  | HS    | WG    | ARC-e | ARC-c | OBQA  | MMLU  | C-Eval |
| ------------------- | -- | ---- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ------ |
| TransNormerLLM3-15B | 15 | 0.05 | 62.08 | 72.52 | 55.55 | 57.14 | 62.12 | 31.14 | 32.40 | 27.50 | 26.18  |
| TransNormerLLM3-15B | 15 | 0.10 |       |       |       |       |       |       |       |       |        |

P: parameter size (billion). T: tokens (trillion). BoolQ: acc. PIQA: acc. HellaSwag: acc_norm. WinoGrande: acc. ARC-easy: acc. ARC-challenge: acc_norm. OpenBookQA: acc_norm. MMLU: 5-shot acc. C-Eval: 5-shot acc.
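
For reference, a run of this kind with lm-evaluation-harness (https://github.com/EleutherAI/lm-evaluation-harness) could look roughly like the sketch below. The repository id, task names, and few-shot settings are assumptions for illustration, not the official evaluation recipe.

```python
# Minimal sketch using lm-evaluation-harness's Python API (assumes a recent lm_eval release).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    # Placeholder repo id -- replace with the released checkpoint path.
    model_args="pretrained=OpenNLPLab/TransNormerLLM3-15B,trust_remote_code=True,dtype=bfloat16",
    tasks=["boolq", "piqa", "hellaswag", "winogrande",
           "arc_easy", "arc_challenge", "openbookqa"],
    num_fewshot=0,        # the table reports MMLU and C-Eval as 5-shot; run those separately
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```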

## Acknowledgments and Citation

### Acknowledgments

Our project builds on the following open-source projects:

### Citation

If you wish to cite our work, please use the following reference:

@article{qin2023scaling,
  title={Scaling transnormer to 175 billion parameters},
  author={Qin, Zhen and Li, Dong and Sun, Weigao and Sun, Weixuan and Shen, Xuyang and Han, Xiaodong and Wei, Yunshen and Lv, Baohong and Yuan, Fei and Luo, Xiao and others},
  journal={arXiv preprint arXiv:2307.14995},
  year={2023}
}

- OpenNLPLab @2024 -