---
license: mit
---
# PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training

<font size=6><div align='center' > <a href="https://arxiv.org/pdf/2309.10400v2.pdf">**Paper**</a> | <a href="https://huggingface.co/dwzhu">**Models**</a> | <a href="https://github.com/dwzhu-pku/PoSE">**Code**</a> </div></font>

**Authors**: Dawei Zhu, Nan Yang, Liang Wang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li

## Abstract

Large Language Models (LLMs) are trained with a pre-defined context length, restricting their use in scenarios requiring long inputs. Previous efforts to adapt LLMs to a longer length usually require fine-tuning at this target length (Full-length fine-tuning), incurring intensive training costs. To decouple training length from target length for efficient context window extension, we propose Positional Skip-wisE (PoSE) training, which smartly simulates long inputs using a fixed context window. This is achieved by first dividing the original context window into several chunks, then designing distinct skipping bias terms to manipulate the position indices of each chunk. These bias terms and the lengths of each chunk are altered for every training example, allowing the model to adapt to all positions within the target length. Experimental results show that PoSE greatly reduces memory and time overhead compared with Full-length fine-tuning, with minimal impact on performance. Leveraging this advantage, we have successfully extended the LLaMA model to 128k tokens using a 2k training context window. Furthermore, we empirically confirm that PoSE is compatible with all RoPE-based LLMs and position interpolation strategies. Notably, our method can potentially support an infinite length, limited only by memory usage in inference. With ongoing progress in efficient inference, we believe PoSE can further scale the context window beyond 128k.

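To make the skip-wise manipulation concrete, the following Python sketch shows one way the position indices for a single training example could be sampled. It is an illustrative simplification under an assumed chunking and skip distribution, not the exact sampling procedure used in the PoSE training code.

```python
import random


def pose_position_ids(train_len, target_len, num_chunks=2):
    """Sample skip-wise position indices for one training example (sketch).

    The train_len tokens are split into num_chunks contiguous chunks; indices
    stay consecutive inside each chunk, but a random skipping bias is inserted
    before each chunk so that, across many examples, the sampled indices cover
    the whole range [0, target_len).
    """
    assert target_len >= train_len and 1 <= num_chunks <= train_len
    # Random chunk lengths that sum to train_len (every chunk non-empty).
    cuts = sorted(random.sample(range(1, train_len), num_chunks - 1))
    chunk_lens = [b - a for a, b in zip([0] + cuts, cuts + [train_len])]

    # Random skipping bias terms whose total stays within target_len - train_len.
    budget = target_len - train_len
    skip_cuts = sorted(random.randint(0, budget) for _ in range(num_chunks))
    skips = [b - a for a, b in zip([0] + skip_cuts[:-1], skip_cuts)]

    position_ids, next_pos = [], 0
    for skip, length in zip(skips, chunk_lens):
        next_pos += skip  # jump over the skipped positions before this chunk
        position_ids.extend(range(next_pos, next_pos + length))
        next_pos += length
    return position_ids
```

For instance, a model with a 2k training window targeting 16k would call something like `pose_position_ids(2048, 16384)` for each training example, so that every position index up to 16k is eventually visited.
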
## Released models

### Context-Extended Versions of LLaMA (original context window: 2k)

| Model | Context (tokens) | Interpolation | Link |
| --- | --- | --- | --- |
| LLaMA-7B-PoSE-Linear-16k | 16,384 | Linear | [download link](https://huggingface.co/dwzhu/LLaMA-7B-PoSE-Linear-16k) |
| LLaMA-7B-PoSE-NTK-16k | 16,384 | NTK | [download link](https://huggingface.co/dwzhu/LLaMA-7B-PoSE-NTK-16k) |
| LLaMA-7B-PoSE-YaRN-16k | 16,384 | YaRN | [download link](https://huggingface.co/dwzhu/LLaMA-7B-PoSE-YaRN-16k) |
| LLaMA-7B-PoSE-Linear-96k | 98,304 | Linear | [download link](https://huggingface.co/dwzhu/LLaMA-7B-PoSE-Linear-96k) |
| LLaMA-7B-PoSE-YaRN-96k | 98,304 | YaRN | [download link](https://huggingface.co/dwzhu/LLaMA-7B-PoSE-YaRN-96k) |
| LLaMA-7B-PoSE-YaRN-128k | 131,072 | YaRN | [download link](https://huggingface.co/dwzhu/LLaMA-7B-PoSE-YaRN-128k) |

### Context-Extended Versions of LLaMA2 (original context window: 4k)

| Model | Context (tokens) | Interpolation | Link |
| --- | --- | --- | --- |
| LLaMA2-7B-PoSE-Linear-16k | 16,384 | Linear | [download link](https://huggingface.co/dwzhu/LLaMA2-7B-PoSE-Linear-16k) |
| LLaMA2-7B-PoSE-NTK-16k | 16,384 | NTK | [download link](https://huggingface.co/dwzhu/LLaMA2-7B-PoSE-NTK-16k) |
| LLaMA2-7B-PoSE-YaRN-16k | 16,384 | YaRN | [download link](https://huggingface.co/dwzhu/LLaMA2-7B-PoSE-YaRN-16k) |

### Context-Extended Versions of Baichuan2 (original context window: 4k)

| Model | Context (tokens) | Interpolation | Link |
| --- | --- | --- | --- |
| Baichuan2-7B-PoSE-Linear-16k | 16,384 | Linear | [download link](https://huggingface.co/dwzhu/Baichuan2-7B-PoSE-Linear-16k) |
| Baichuan2-7B-PoSE-NTK-16k | 16,384 | NTK | [download link](https://huggingface.co/dwzhu/Baichuan2-7B-PoSE-NTK-16k) |
| Baichuan2-7B-PoSE-YaRN-16k | 16,384 | YaRN | [download link](https://huggingface.co/dwzhu/Baichuan2-7B-PoSE-YaRN-16k) |

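The checkpoints above follow the standard Hugging Face format, so a generic `transformers` loading recipe along these lines should work; the model id, dtype, device settings, and prompt below are only illustrative.

```python
# Generic loading sketch (not repo-specific): pick any model id from the tables above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dwzhu/LLaMA-7B-PoSE-Linear-16k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision helps fit long contexts in memory
    device_map="auto",
)

prompt = "Summarize the following document: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For the YaRN variants, the revised modeling file `pose_modeling_llama.py` mentioned in the Notice below may be needed in place of the stock LLaMA implementation.
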
## Notice

- For YaRN interpolation, we use the revised version of YaRN in our experiments (see `pose_modeling_llama.py`), as discussed in the issue [inv_freq seems not calculated right](https://github.com/jquesnelle/yarn/issues/24).
- We explicitly set the configuration's `max_position_embeddings` parameter to the scaled length. This differs slightly from the usage described in the Hugging Face LLaMA documentation ([huggingface.co](http://huggingface.co/)). We made this adjustment because positional skip-wise training uses position indices that exceed the input length. This modification does not negatively impact model performance (see the quick check below).

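As a quick check of the point above, loading a released configuration should report the scaled length rather than the original 2k/4k window. The snippet below is a minimal sketch, assuming the standard `LlamaConfig` attribute name.

```python
from transformers import AutoConfig

# The extended models expose the scaled context length directly in the config.
config = AutoConfig.from_pretrained("dwzhu/LLaMA-7B-PoSE-Linear-16k")
print(config.max_position_embeddings)  # expected: 16384 for the 16k models
```
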
## Citation

If you find this project useful in your research, please consider citing:

```
@misc{zhu2023pose,
  title={PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training},
  author={Dawei Zhu and Nan Yang and Liang Wang and Yifan Song and Wenhao Wu and Furu Wei and Sujian Li},
  year={2023},
  eprint={2309.10400},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Acknowledgement

- This work builds upon LLaMA, GPT-J, and Baichuan as the pre-trained base models.