---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
---

# Model Card for sparsing-law-0.4b-relu

- **Paper:** [paper](https://arxiv.org/pdf/2411.02335)
- **Repository containing relevant code:** [github](https://github.com/thunlp/SparsingLaw)

### Introduction

The model is one of the key checkpoints used for most analyses in the paper *Sparsing Law: Towards Large Language Models with Greater Activation Sparsity*.
It is ReLU-activated and contains approximately 0.4 billion non-embedding parameters.

The model was trained from scratch on the pre-training dataset described in our paper, using the WSD (Warmup-Stable-Decay) learning rate scheduler.
Note that it is a base model taken from the last checkpoint of the stable pre-training stage; it has not undergone the decay stage or SFT.

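If the checkpoint is hosted on the Hugging Face Hub, it should load through the standard `transformers` causal-LM interface. The sketch below is a minimal usage example under that assumption; the repository id is a placeholder, and `trust_remote_code=True` may or may not be needed depending on how the architecture is registered.

```python
# Minimal usage sketch (assumptions: placeholder repo id, standard causal-LM
# interface; trust_remote_code may be unnecessary for this architecture).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sparsing-law-0.4b-relu"  # placeholder: replace with the actual Hub path
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

inputs = tokenizer("Activation sparsity is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a base model from the stable pre-training stage, expect raw completions rather than instruction-following behavior.
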
### Paper Abstract

**Sparsing Law: Towards Large Language Models with Greater Activation Sparsity**

Activation sparsity denotes the existence of substantial weakly-contributed elements within activation outputs that can be eliminated, benefiting many important applications concerned with large language models (LLMs), such as computation acceleration and model interpretability.
Although promoting greater activation sparsity within LLMs deserves deep studies, existing works lack comprehensive and quantitative research on the correlation between activation sparsity and potentially influential factors.
In this paper, we present a comprehensive study on the quantitative scaling properties and influential factors of the activation sparsity within decoder-only Transformer-based LLMs.
Specifically, we propose PPL-\\(p\%\\) sparsity, a precise and performance-aware activation sparsity metric that is applicable to any activation function.
Through extensive experiments, we find several important phenomena.
Firstly, different activation functions (i.e., ReLU and SiLU) exhibit comparable performance but opposite training-time sparsity trends. The activation ratio (i.e., \\(1-\mathrm{sparsity\ ratio}\\)) evolves as a convergent increasing power-law and a decreasing logspace power-law with the amount of training data for SiLU-activated and ReLU-activated LLMs, respectively. These demonstrate that ReLU is more efficient as the activation function than SiLU and can leverage more training data to improve activation sparsity.
Secondly, the activation ratio linearly increases with the width-depth ratio below a certain bottleneck point, indicating the potential advantage of a deeper architecture at a fixed parameter scale.
Finally, at similar width-depth ratios, we surprisingly find that the limit value of activation sparsity varies weakly with the parameter scale, i.e., the activation patterns within LLMs are insensitive to the parameter scale. These empirical laws towards LLMs with greater activation sparsity have important implications for making LLMs more efficient and interpretable.

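The simplest way to see what the "activation ratio" refers to is to count nonzero entries after the ReLU in a feed-forward block. The sketch below only illustrates that quantity with made-up dimensions and a plain zero threshold; it is not the PPL-\\(p\%\\) metric, which is defined in the paper and repository linked above.

```python
# Illustration only: activation ratio (= 1 - sparsity ratio) of a ReLU FFN,
# measured as the fraction of nonzero intermediate activations.
# Dimensions are hypothetical; this is NOT the paper's PPL-p% sparsity metric.
import torch
import torch.nn as nn

hidden_size, intermediate_size = 1024, 4096   # hypothetical sizes
ffn_up = nn.Linear(hidden_size, intermediate_size)
relu = nn.ReLU()

x = torch.randn(8, 128, hidden_size)          # (batch, sequence, hidden)
h = relu(ffn_up(x))                           # intermediate activations

activation_ratio = (h != 0).float().mean().item()
print(f"activation ratio: {activation_ratio:.3f}, sparsity ratio: {1 - activation_ratio:.3f}")
```
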
### Citation

Please kindly cite using the following BibTeX:

```bibtex
@article{luo2024sparsinglaw,
  title={{Sparsing Law}: Towards Large Language Models with Greater Activation Sparsity},
  author={Yuqi Luo and Chenyang Song and Xu Han and Yingfa Chen and Chaojun Xiao and Zhiyuan Liu and Maosong Sun},
  year={2024},
  journal={arXiv preprint arXiv:2411.02335},
  url={https://arxiv.org/pdf/2411.02335.pdf}
}
```