POSE: EFFICIENT CONTEXT WINDOW EXTENSION OF LLMS VIA POSITIONAL SKIP-WISE TRAINING

Dawei Zhu∗†‡  Nan Yang‡  Liang Wang‡  Yifan Song†  Wenhao Wu†  Furu Wei‡  Sujian Li†
†School of Computer Science, Peking University
‡Microsoft Corporation
https://github.com/dwzhu-pku/PoSE

ABSTRACT

Large Language Models (LLMs) are trained with a pre-defined context length, restricting their use in scenarios requiring long inputs. Previous efforts for adapting LLMs to a longer length usually require fine-tuning at this target length (Full-length fine-tuning), incurring intensive training costs. To decouple training length from target length for efficient context window extension, we propose Positional Skip-wisE (PoSE) training, which smartly simulates long inputs using a fixed context window. This is achieved by first dividing the original context window into several chunks, then designing distinct skipping bias terms to manipulate the position indices of each chunk. These bias terms, as well as the lengths of each chunk, are altered for every training example, allowing the model to adapt to all positions within the target length. Experimental results show that PoSE greatly reduces memory and time overhead compared with Full-length fine-tuning, with minimal impact on performance. Leveraging this advantage, we have successfully extended the LLaMA model to 128k tokens using a 2k training context window. Furthermore, we empirically confirm that PoSE is compatible with all RoPE-based LLMs and position interpolation strategies. Notably, our method can potentially support infinite length, limited only by memory usage in inference. With ongoing progress in efficient inference, we believe PoSE can further scale the context window beyond 128k.

1 INTRODUCTION
Large Language Models (LLMs) have revolutionized language modeling and demonstrated impressive abilities to perform various tasks (Brown et al., 2020). However, even with their remarkable capacity, these LLMs remain restricted by pre-defined context window sizes, suffering from notable performance decline when input tokens exceed these limits. Nevertheless, numerous application scenarios demand extremely long input sequences, including long document summarization (Huang et al., 2021), in-context learning with numerous examples (Li et al., 2023), and long document retrieval (Zhou et al., 2022), etc. This naturally poses a significant challenge of context window extension: extending the context window of a pre-trained LLM to accommodate longer sequences.

Naively fine-tuning LLMs on inputs of target length for window extension has seen limited success due to the large disruption introduced by new position indices (Chen et al., 2023a). Addressing this, Position Interpolation (Chen et al., 2023a; kaiokendev, 2023; Peng et al., 2023) proposes to down-scale the position indices to match the original window size, yielding improved results for context extension. However, these methods still rely on Full-length fine-tuning, i.e., fine-tuning with context of target length, which is memory- and time-intensive due to the computational complexity that increases quadratically with input length. For example, Chen et al. (2023a) use 32 A100 GPUs to extend LLaMA models from 2k to 8k context, and 128 A100 GPUs for even larger contexts. This computational cost has made it impractical to extend the context window to extreme lengths.

In this paper, we introduce Positional Skip-wisE (PoSE) fine-tuning to decouple the fine-tuning length from the target context window length, unleashing the possibility of efficiently extending the context window to an extreme size.

∗Work done during internship at MSRA.
Figure 1: Position indices of Full-length fine-tuning v.s. PoSE fine-tuning for extending the context window size from 2,048 to 8,192. At each iteration, the former directly takes 8,192 tokens for fine-tuning, while PoSE manipulates the position indices of 2,048 tokens to simulate longer inputs. For example, we partition the original context window of 2,048 tokens into two chunks, and adjust the position indices of the second chunk by adding a distinct skipping bias term. These bias terms, as well as the length of each chunk, are altered for each training example, so that the model can adapt to all relative positions of the target context window through fine-tuning. (Example relative positions covered: #1 [1, 1535] ∪ [6049, 8095]; #2 [1, 1024] ∪ [2529, 4575].)
The key idea of PoSE is to simulate long inputs by manipulating position indices within a fixed context window. As depicted in Figure 1, we partition the original context window into several chunks, and adjust the position indices of each chunk by adding a distinct skipping bias term. These bias terms, as well as the length of each chunk, are altered for each training example, so that the model can adapt to all positions (including both absolute and relative) within the target context window through fine-tuning. Meanwhile, by maintaining continuous position indices within each chunk, PoSE bears a close resemblance to pre-training. As a result, the model's pre-trained capacity for language modeling and comprehension is retained to the greatest degree.
The advantages of PoSE are threefold: 1) Memory and Time Efficiency: By only requiring the original context size for fine-tuning, PoSE circumvents the quadratic increase in computational complexity with respect to target length during the fine-tuning stage, thereby significantly reducing memory and time overhead. 2) Potential for Extremely-Long Context: We manage to extend the context window of LLaMA (Touvron et al., 2023a) by up to 64 times (2k → 128k, k=1,024) while preserving a decent ability for language modeling and understanding. 3) Compatibility with all RoPE-based LLMs and PI strategies: The effectiveness of PoSE has been empirically validated across several representative RoPE-based LLMs, including LLaMA, LLaMA2 (Touvron et al., 2023b), GPT-J (Wang & Komatsuzaki, 2021), and Baichuan (Baichuan, 2023). Additionally, PoSE has been demonstrated to be compatible with a variety of position interpolation methods, including Linear (Chen et al., 2023a), NTK (Peng & Quesnelle, 2023), and YaRN (Peng et al., 2023) interpolation.

Notably, by decoupling the fine-tuning and target length, PoSE can theoretically extend the context window to an infinite length. The only constraint is the memory usage during the inference phase. Hopefully, with continuous advancements in efficient inference techniques, including Flash Attention (Dao et al., 2022; Dao, 2023), xFormers (Lefaudeux et al., 2022), vLLM (Kwon et al., 2023), etc., we believe PoSE can promisingly push the context window size to an even larger scale.

2 RELATED WORK
Training Length-Extrapolatable Models. Length extrapolation aims to ensure that the model continues to perform well even when the number of input tokens during inference exceeds the size of the context window on which the model is trained (Press et al., 2021). To this end, a series of positional embedding schemes have been proposed, including ALiBi (Press et al., 2021), xPos (Sun et al., 2023), NoPos (Haviv et al., 2022), etc.

Similar to our work, Ruoss et al. (2023) also attempted to simulate longer sequences during training time to mitigate out-of-distribution lengths. They proposed randomized positional encoding (RandPos), which randomly selects an ordered subset of position indices from longer sequences.
Our proposed method, PoSE, diverges from their approach in several key aspects: First, RandPos is a positional embedding scheme designed for pre-training encoder-only models from scratch to enhance length generalization ability. In contrast, PoSE is a fine-tuning method that aims to efficiently extend the context window of pre-trained LLMs, the majority of which follow a decoder-only architecture. Second, in RandPos, the position indices between adjacent tokens are not continuous. However, in PoSE, the position indices within each chunk are intentionally made continuous to closely resemble the pre-training phase, therefore reducing the risk of disrupting the language modeling and understanding abilities learned during the pre-training stage.

Fine-tuning LLMs for Longer Context. Differing from length extrapolation, which primarily involves training a model from scratch to support lengths exceeding those it was initially trained for,
context window extension focuses on extending the context window of a pre-trained LLM. Directly fine-tuning an existing LLM with a longer context window has been shown to progress slowly (Chen et al., 2023a). To expedite and stabilize training, Chen et al. (2023a) first down-scaled position indices to match the original context size through Linear Position Interpolation. Subsequently, a range of Position Interpolation (PI) strategies have been introduced, including NTK (Peng & Quesnelle, 2023) and YaRN (Peng et al., 2023). More recently, LongLoRA (Chen et al., 2023b) proposes shift short attention to approximate full attention. However, all these methods require Full-length fine-tuning, suffering computational costs that grow with target context size. By contrast, our method manages to decouple train / target length, requiring only the original context size for fine-tuning.

Memory Transformers. An alternative strategy for managing extremely long input sequences
involves the adoption of memory mechanisms. Typically, there are two lines of research for utilizing memory: the recurrence-based approach (Dai et al., 2019; Bulatov et al., 2022) and the retrieval-based approach (Wu et al., 2022; Wang et al., 2023; Tworkowski et al., 2023). The recurrence-based approach involves segmenting long inputs and reusing the hidden states obtained from preceding segments to serve as memory for the current segment. Nonetheless, this architecture is hindered by information loss and limited capacity for random access. On the other hand, the retrieval-based paradigm entails encoding prior sequences as (key, value) pairs and utilizing a memory retriever and reader to extract previously encoded information. The primary limitation of this approach is the absence of interaction between discrete memory segments. More recently, Mohtashami & Jaggi (2023) introduced landmark attention, which facilitates random access to any chunk of the input by introducing landmark tokens. In contrast, our method achieves full access to the entire input without any modifications to the attention mechanism.

3 METHODOLOGY

3.1 PRELIMINARIES

Rotary Position Embedding (RoPE). The use of RoPE (Su et al., 2021) has become pervasive in contemporary LLMs, including LLaMA (Touvron et al., 2023a), GPT-J (Wang & Komatsuzaki, 2021), etc. It encodes position information of tokens with a rotation matrix that naturally incorporates explicit relative position dependency.
To elucidate, given a hidden vector $h = [h_0, h_1, \ldots, h_{d-1}]$, where $d$ is the hidden dimension, and a position index $m$, RoPE operates as follows:

$$
f(h, m) =
\begin{pmatrix} h_0 \\ h_1 \\ h_2 \\ h_3 \\ \vdots \\ h_{d-2} \\ h_{d-1} \end{pmatrix}
\otimes
\begin{pmatrix} \cos m\theta_0 \\ \cos m\theta_0 \\ \cos m\theta_1 \\ \cos m\theta_1 \\ \vdots \\ \cos m\theta_{d/2-1} \\ \cos m\theta_{d/2-1} \end{pmatrix}
+
\begin{pmatrix} -h_1 \\ h_0 \\ -h_3 \\ h_2 \\ \vdots \\ -h_{d-1} \\ h_{d-2} \end{pmatrix}
\otimes
\begin{pmatrix} \sin m\theta_0 \\ \sin m\theta_0 \\ \sin m\theta_1 \\ \sin m\theta_1 \\ \vdots \\ \sin m\theta_{d/2-1} \\ \sin m\theta_{d/2-1} \end{pmatrix}
\tag{1}
$$
where $\theta_j = 10000^{-2j/d}$, $j \in \{0, 1, \ldots, d/2 - 1\}$. Unlike previous absolute position encodings that are directly applied to the input vector $x$, RoPE is employed on the query and key vectors at each layer. Considering a query vector $q$ at position $m$ and a key vector $k$ at position $n$, the attention score $a(q, k)$ is defined as follows:

$$
\begin{aligned}
a(q, k) &= \langle f(q, m), f(k, n) \rangle \\
&= \sum_{j=0}^{d/2-1} \left[ (q_{2j} k_{2j} + q_{2j+1} k_{2j+1}) \cos (m-n)\theta_j + (q_{2j} k_{2j+1} - q_{2j+1} k_{2j}) \sin (m-n)\theta_j \right] \\
&:= g(q, k, \theta, m-n)
\end{aligned}
\tag{2}
$$

Hence, RoPE encodes position information in a relative manner, as the attention score depends on the relative distances between positions rather than their absolute position values.
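To make Equations 1 and 2 concrete, the following is a minimal NumPy sketch of the rotation and of the resulting relative-position property; the function names and the interleaved dimension pairing are our own simplification, and production implementations (e.g., in LLaMA or GPT-J) differ in how dimensions are paired and cached.

```python
# Minimal sketch of RoPE (Equations 1-2); plain NumPy, for illustration only.
import numpy as np

def rope_rotate(h: np.ndarray, m: int, base: float = 10000.0) -> np.ndarray:
    """Apply the rotation of Equation 1 to a hidden vector h at position m."""
    d = h.shape[-1]
    theta = base ** (-2.0 * np.arange(d // 2) / d)      # theta_j, j = 0..d/2-1
    cos = np.repeat(np.cos(m * theta), 2)               # (cos m*theta_0, cos m*theta_0, ...)
    sin = np.repeat(np.sin(m * theta), 2)
    h_rot = np.empty_like(h)                            # (-h1, h0, -h3, h2, ...)
    h_rot[0::2] = -h[1::2]
    h_rot[1::2] = h[0::2]
    return h * cos + h_rot * sin

def attention_score(q: np.ndarray, k: np.ndarray, m: int, n: int) -> float:
    """Score of Equation 2: depends only on the relative distance m - n."""
    return float(np.dot(rope_rotate(q, m), rope_rotate(k, n)))

# The relative property: shifting both positions by the same offset leaves the score unchanged.
q, k = np.random.randn(64), np.random.randn(64)
assert np.isclose(attention_score(q, k, m=5, n=2), attention_score(q, k, m=105, n=102))
```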
Problem Formulation. Given a Large Language Model pre-trained with a context window size of $L_c$, our objective is to extend this context size to a target length $L_t$, so that the model maintains good performance when processing input sequences containing a maximum of $L_t$ tokens.

Position Interpolation (PI). In contrast to directly extending the position indices to $L_t - 1$ when dealing with an input text $x = \{x_0, x_1, \ldots, x_{L_t}\}$, position interpolation down-scales the position indices to align with the original context window size $L_c$. This approach effectively mitigates the risk of encountering extreme values and has been empirically demonstrated to enhance stability during fine-tuning. Various interpolation strategies have been proposed, with $\alpha = L_t / L_c$ denoting the scaling factor (a short code sketch of the Linear and NTK rules is given after this list):

• Linear Interpolation. As described by Chen et al. (2023a) and kaiokendev (2023), linear interpolation involves a proportional down-scaling of the position index $m$ to $m/\alpha$. Consequently, the attention score between a query $q$ at position $m$ and a key $k$ at position $n$ becomes $g(q, k, \theta, (m-n)/\alpha)$, as defined in Equation 2. Theoretical analysis has substantiated that the interpolated attention score exhibits significantly greater stability compared to the extrapolated counterpart.

• Neural Tangent Kernel (NTK) Interpolation. In contrast to linear interpolation, NTK interpolation alters the base of RoPE, effectively modifying the rotational "speed" of each dimension of RoPE (Peng & Quesnelle, 2023). Specifically, the original $\theta_j = 10000^{-2j/d}$, $j \in \{0, 1, \ldots, d/2 - 1\}$ in RoPE is transformed into $\theta'_j = (10000\lambda)^{-2j/d}$, where $\lambda = \alpha^{d/(d-2)}$. It is noteworthy that the value of $\lambda$ is chosen to ensure that $m\theta'_{d/2-1} = (m/\alpha)\theta_{d/2-1}$.

• YaRN Interpolation. Different from Linear and NTK interpolation, which treat each dimension of RoPE equally, YaRN (Peng et al., 2023) employs a ramp function to combine Linear and NTK interpolation at varying proportions across different dimensions. Simultaneously, it introduces a temperature factor to mitigate the distribution shift of the attention matrix caused by long inputs.
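The sketch below contrasts the Linear and NTK scaling rules from the list above; it is a hedged illustration rather than any library's implementation, and simply verifies the condition that motivates the choice of $\lambda$ in NTK interpolation.

```python
# Sketch of Linear vs. NTK position-index scaling for RoPE; alpha = L_t / L_c.
import numpy as np

def rope_thetas(d: int, base: float = 10000.0) -> np.ndarray:
    return base ** (-2.0 * np.arange(d // 2) / d)

def linear_interpolation(m: np.ndarray, alpha: float) -> np.ndarray:
    # Down-scale the position indices: m -> m / alpha; the thetas stay unchanged.
    return m / alpha

def ntk_thetas(d: int, alpha: float, base: float = 10000.0) -> np.ndarray:
    # Change the base instead of the indices: theta'_j = (base * lambda)^(-2j/d),
    # with lambda = alpha^(d/(d-2)) so that m * theta'_{d/2-1} == (m/alpha) * theta_{d/2-1}.
    lam = alpha ** (d / (d - 2))
    return (base * lam) ** (-2.0 * np.arange(d // 2) / d)

d, alpha, m = 128, 4.0, 8192
assert np.isclose(m * ntk_thetas(d, alpha)[-1], (m / alpha) * rope_thetas(d)[-1])
```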
3.2 PROPOSED APPROACH: POSITIONAL SKIP-WISE TRAINING (POSE)

Although position interpolation effectively addresses out-of-distribution position indices, extending to an extreme length by fine-tuning on a context window of this size remains impractical, owing to the quadratic growth in the computational complexity of attention as sequence length increases. Instead, we explore training within the original context window $L_c$ and achieving context window extension via manipulating position indices to simulate longer inputs.

There are two design desiderata for this endeavor: First, to avoid out-of-distribution positions during inference, the relative distances of the manipulated position indices should comprehensively cover the range of $\{1, \ldots, L_t - 1\}$. Second, fine-tuning with the manipulated position indices should not harm the original abilities of LLMs, so the structure of the manipulated position indices should adhere to the original structure to the greatest extent possible.
Initially, we randomly divide the original context window $L_c$ into $N$ chunks $c_0, c_1, \ldots, c_{N-1}$, each with lengths $l_0, l_1, \ldots, l_{N-1}$, where $\sum_{i=0}^{N-1} l_i = L_c$. We introduce the starting index $st_i$ for each chunk $c_i$, which facilitates the formulation of its position indices as follows:

$$
\mathrm{Pos}(c_i) = \{st_i, st_i + 1, \ldots, st_i + l_i - 1\}, \quad st_i = \sum_{j=0}^{i-1} l_j \tag{3}
$$

Subsequently, we employ the discrete uniform distribution $\mathcal{U}(S)$ to sample a skipping bias term $u_i \sim \mathcal{U}(\{u_{i-1}, \ldots, L_t - L_c\})$ for each chunk $c_i$. This bias term is applied to the corresponding chunk to transform the original position indices into:
$$
\mathrm{PoSE}(c_i) = \{u_i + st_i, u_i + st_i + 1, \ldots, u_i + st_i + l_i - 1\} \tag{4}
$$

Note that the constraint $u_i \geq u_{i-1}$ is applied to prevent position index overlaps between chunks. Intuitively, the introduction of skipping bias terms exposes the model to a more diverse range of relative positions. To achieve comprehensive coverage of the target context window, we re-sample both the length and the skipping bias term of every chunk for each training example. Moreover, the continuity of position indices within each chunk closely resembles the structure employed during pre-training. Consequently, fine-tuning the model on these new position indices for language modeling does not compromise its original capabilities.

Concerning the text contained within each chunk, a similar procedure is followed to select continuous spans of tokens from the input text $x = \{x_0, x_1, \ldots, x_{L_x}\}$. To elaborate, we begin by sampling a bias term $v_i \sim \mathcal{U}(\{v_{i-1}, \ldots, L_x - L_c\})$, followed by assigning the content of chunk $c_i$ as below:

$$
c_i = x[v_i + st_i : v_i + st_i + l_i] \tag{5}
$$

Notably, we have also explored other assigning strategies for $v_i$, including scenarios where $v_i = 0$, which results in genuinely continuous content for the chunks, or $v_i = u_i$, aligning the manipulated position indices with actual positions in the original text.
However, we observe that these variations have relatively little impact on the outcomes of fine-tuning.

After the position indices and content for each chunk are settled, we perform position interpolation for stabilized fine-tuning. For simplicity, we set the initial bias terms $u_0$ and $v_0$ to 0. In terms of the chunk number $N$, we view it as a trade-off between efficiency and effectiveness, because an increase in the number of chunks deviates further from the position structure of pre-training, which may harm the abilities acquired during pre-training. Hence, in this paper we set $N$ to 2, exposing the models to a wider range of relative positions while adhering as closely to the original position structure as possible. (See Appendix A and B for further discussion of $v_i$ and $N$.) A minimal sketch of this sampling procedure is given below.
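The following is a hedged sketch of the chunk, skipping-bias, and content sampling of Equations 3-5; it assumes $N = 2$ as in the paper, uses plain Python's random module, and is not the authors' released code.

```python
# Minimal sketch of PoSE position-index and content sampling (Equations 3-5).
import random

def pose_sample(L_c: int, L_t: int, L_x: int, N: int = 2):
    """Return (position_indices, content_slices) for one training example."""
    # 1) Randomly split the training window L_c into N chunk lengths l_0..l_{N-1}.
    cuts = sorted(random.sample(range(1, L_c), N - 1))
    lengths = [b - a for a, b in zip([0] + cuts, cuts + [L_c])]

    positions, slices = [], []
    u_prev, v_prev, st = 0, 0, 0
    for l_i in lengths:
        # 2) Skipping bias u_i ~ U({u_{i-1}, ..., L_t - L_c}); u_0 = 0 for simplicity.
        u_i = random.randint(u_prev, L_t - L_c) if st > 0 else 0
        # 3) Content bias v_i ~ U({v_{i-1}, ..., L_x - L_c}); v_0 = 0 for simplicity.
        v_i = random.randint(v_prev, L_x - L_c) if st > 0 else 0
        positions.append(list(range(u_i + st, u_i + st + l_i)))   # Equation 4
        slices.append((v_i + st, v_i + st + l_i))                 # Equation 5
        u_prev, v_prev, st = u_i, v_i, st + l_i
    return positions, slices

# Example: extend a 2k training window to an 8k target on a 100k-token document.
pos, sl = pose_sample(L_c=2048, L_t=8192, L_x=100_000)
```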
4 EXPERIMENTS

In this section, we conduct experiments to verify the effectiveness of PoSE for context window extension. Our method demonstrates impressive results on context lengths of both 16k and 32k for language modeling as well as passkey retrieval. Other advantages of PoSE are discussed in Section 5.

4.1 SETUPS

Training Procedure. For each setting in the main experiments, we train LLaMA-7B with the next-token prediction objective. This training process comprises 1,000 steps, employing a global batch size of 64 on 8 V100 GPUs using DeepSpeed ZeRO stage 3 (Rajbhandari et al., 2020). The fine-tuning dataset is sourced from The Pile (Gao et al., 2020), with a minimum length requirement of 2,048 tokens. Our default choice for interpolation strategies is linear interpolation. For evaluation, we use a single A100 GPU. Flash Attention V2 (Dao, 2023) is applied, making it possible to evaluate long documents of up to 128k tokens (k=1,024).
Evaluation Tasks and Datasets. We examine the ability of long text modeling on two tasks: language modeling and passkey retrieval. The language modeling task is a fundamental task that reflects the overall capability of a model in handling long text. Passkey retrieval, on the other hand, can effectively measure the maximum distance that a token can attend to during the inference stage. We evaluate language modeling on the GovReport (Huang et al., 2021) and Proof-pile (Zhangir et al., 2022) datasets. For passkey retrieval, we follow Mohtashami & Jaggi (2023) to construct synthetic prompts for evaluation.

Baseline Methods. We compare our PoSE training method against the following baselines:

• Full-length fine-tuning takes input tokens of target length for fine-tuning. For this method, computation complexity scales quadratically with target context window size.

• RandPos (Ruoss et al., 2023) is initially designed to train an encoder-only model from scratch for length extrapolation. However, since it shares a similar idea of simulating longer sequences via changing position indices, we include it for a comprehensive comparison. Given the original / target context window lengths $L_c$ / $L_t$, it uniquely samples $L_c$ positions from the set $\{0, \ldots, L_t - 1\}$, arranges them in ascending order, and employs them as new position indices for training (a small sketch of this index sampling is given after this list).
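For comparison with PoSE's sampling (see the pose_sample sketch above), here is a minimal illustration of the RandPos index sampling described in the second bullet; the $L_c$ / $L_t$ values are examples only, not tied to a specific experiment.

```python
# Illustrative RandPos index sampling (Ruoss et al., 2023), as described above.
import random

L_c, L_t = 2048, 16384
randpos_indices = sorted(random.sample(range(L_t), L_c))  # unique, ascending, mostly non-contiguous
# PoSE instead keeps indices contiguous within each chunk (see pose_sample above),
# staying closer to the position structure seen during pre-training.
```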
Table 1: Perplexity of models trained with different methods. We conduct evaluation on the GovReport and Proof-pile datasets, varying the evaluation context window size from 2k to 32k. Our PoSE, with a fixed training window size of 2k, is effectively extended to a target context size of 16k / 32k for inference while receiving only minimal performance degradation compared to Full-length.

Method        Train / Target   GovReport                           Proof-pile
                               2k     4k     8k     16k    32k     2k     4k     8k     16k    32k
None          - / -            4.74   >10^3  >10^3  >10^3  >10^3   2.83   >10^3  >10^3  >10^3  >10^3
Full-length   16k / 16k        4.87   4.70   4.61   4.59   -       2.93   2.71   2.58   2.53   -
RandPos       2k / 16k         11.63  11.17  11.54  15.16  -       7.26   6.83   6.76   7.73   -
RandPos       2k / 32k         93.43  95.85  91.79  93.22  97.57   60.74  63.54  60.56  63.15  66.47
PoSE (Ours)   2k / 16k         4.84   4.68   4.60   4.60   -       2.95   2.74   2.61   2.60   -
PoSE (Ours)   2k / 32k         4.91   4.76   4.68   4.64   4.66    3.01   2.78   2.66   2.60   2.59
4.2 LANGUAGE MODELING
First, we investigate the impacts of different fine-tuning methods on long-sequence language modeling using the GovReport and Proof-pile datasets. GovReport is a summarization dataset comprising 19,402 reports published by the Congress and the U.S. Government, with an average document length of 7,866 tokens. We randomly select 50 reports containing more than 32,768 tokens for evaluation. Similarly, Proof-pile is a 13GB dataset of long mathematical documents. In line with the approach taken for GovReport, we choose 50 samples from Proof-pile that contain more than 32,768 tokens for evaluation.

Table 1 presents the results of scaling to 16k and 32k using Full-length training, RandPos, and PoSE. For each scaled model, as well as the non-fine-tuned LLaMA model (None), we report perplexity scores at various evaluation context window sizes, ranging from 2k to 32k, employing the sliding window approach proposed by Press et al. (2021). For evaluation efficiency, we set the stride of the sliding window to 1,024 (a sketch of this evaluation loop is shown below).
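For concreteness, the following is a hedged sketch of such a sliding-window perplexity loop for a Hugging Face causal language model; the stride bookkeeping follows the common Press et al. (2021)-style recipe rather than the authors' exact evaluation script.

```python
# Sliding-window perplexity sketch (stride = 1024); assumes a Hugging Face causal LM.
import math
import torch

@torch.no_grad()
def sliding_window_ppl(model, input_ids: torch.Tensor, window: int, stride: int = 1024) -> float:
    """input_ids: (1, seq_len) token ids; window: evaluation context size."""
    seq_len = input_ids.size(1)
    nll, n_tokens, prev_end = 0.0, 0, 0
    for begin in range(0, seq_len, stride):
        end = min(begin + window, seq_len)
        trg_len = end - prev_end                 # new tokens to score in this window
        chunk = input_ids[:, begin:end]
        target = chunk.clone()
        target[:, :-trg_len] = -100              # already-scored tokens act only as context
        out = model(chunk, labels=target)
        nll += out.loss.item() * trg_len         # approximate total NLL for this window
        n_tokens += trg_len
        prev_end = end
        if end == seq_len:
            break
    return math.exp(nll / n_tokens)
```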
First, we observe an overall decreasing trend of perplexity for both models scaled to 16k and 32k via PoSE as the evaluation context window size increases, proving their ability to leverage longer context. Second, with significantly shorter context length during fine-tuning, our PoSE achieves comparable results with Full-length fine-tuning, consolidating its effectiveness. Third, our method achieves much stronger results than RandPos. We suppose this is because our manipulated position indices closely resemble those of pre-training, thereby preserving the pre-trained language modeling ability to the greatest extent.

We also notice that all the scaling methods suffer certain performance degradation as the supported context length increases. We perceive this as a trade-off between the quantity of tokens the model can process and the level of granularity in the attention the model can pay to each individual token.

4.3 PASSKEY RETRIEVAL FOR EFFECTIVE CONTEXT WINDOW

To effectively measure the maximum distance that a token can attend to during the inference stage, we adopt the passkey retrieval test proposed by Mohtashami & Jaggi (2023).
In this test, models are tasked with recovering a random passkey hidden within a lengthy document. The prompt template used for this task is presented in Figure 2a.

Specifically, we compare the non-fine-tuned LLaMA model (denoted as None) with the PoSE-extended versions for 16k and 32k context window sizes. For each model, we vary the prompt length from 2k to 32k. In each case, we conduct the passkey retrieval test 50 times, with a random passkey of 5 digits generated and placed at a random position inside the prompt for each trial (a sketch of the prompt construction is given after the figure caption).

Figure 2: (a) Prompt template used for passkey retrieval:

"There is an important info hidden inside a lot of irrelevant text. Find it and memorize them. I will quiz you about the important information there.
The grass is green. The sky is blue. The sun is yellow. Here we go. There and back again. (repeat x times)
The pass key is 81501. Remember it. 81501 is the pass key.
The grass is green. The sky is blue. The sun is yellow. Here we go. There and back again. (repeat y times)
What is the pass key? The pass key is"

(b) Retrieval accuracy for the non-fine-tuned LLaMA model (None) and the PoSE-extended counterparts for 16k / 32k window size. Both PoSE-extended models maintain a high retrieval accuracy (≥90%) within their respective context window.
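To illustrate how such prompts can be built, here is a small sketch following the Figure 2a template; the filler sentence comes from the figure, while the filler count and the passkey placement logic are our own assumptions rather than the authors' exact generator.

```python
# Illustrative passkey-retrieval prompt builder in the style of Mohtashami & Jaggi (2023).
import random

FILLER = ("The grass is green. The sky is blue. The sun is yellow. "
          "Here we go. There and back again. ")

def build_passkey_prompt(n_filler: int = 300) -> tuple[str, str]:
    passkey = f"{random.randint(0, 99999):05d}"          # random 5-digit passkey
    insert_at = random.randint(0, n_filler)              # random position inside the filler
    body = (FILLER * insert_at
            + f"The pass key is {passkey}. Remember it. {passkey} is the pass key. "
            + FILLER * (n_filler - insert_at))
    prompt = ("There is an important info hidden inside a lot of irrelevant text. "
              "Find it and memorize them. I will quiz you about the important information there.\n"
              + body + "\nWhat is the pass key? The pass key is")
    return prompt, passkey
```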
Figure 2b illustrates the results. For the non-fine-tuned LLaMA model (None), the retrieval accuracy rapidly drops to 0 when the prompt length exceeds 2k. In contrast, both PoSE-extended models managed to maintain a high retrieval accuracy (≥90%) within their respective target context windows. This indicates that models trained via PoSE genuinely possess the capability to attend to all tokens within the extended context windows.

5 ANALYSIS
In this section, we analyze the advantages of PoSE, including 1) memory and time efficiency; 2) compatibility with all RoPE-based LLMs and diverse interpolation strategies; and 3) potential for extremely-long context. In Section 5.4, we also verify that model performance within the original context window receives only minimal degradation.

5.1 MEMORY AND TIME EFFICIENCY

We study the memory and time efficiency of PoSE compared with Full-length fine-tuning. For each method, we scale LLaMA-7B to 4k / 8k / 16k through 1,000 training steps with a global batch size of 16 on 8 V100 GPUs. Experiment results are demonstrated in Figure 3. Figures 3(a) and (b) respectively illustrate memory and time consumption for 1,000 steps of Full-length versus PoSE. While the training cost of Full-length increases rapidly with target window length, PoSE only requires a fixed quota of memory and time for context extension, which is significantly lower.
Figure 3(c) further compares the model perplexity of the two training methods at different steps on GovReport. Notably, both models achieve relatively low perplexity levels within the initial 100 training steps. Moreover, at each step, our proposed PoSE, while requiring only a training context size of 2k tokens, exhibits very close language modeling ability to Full-length fine-tuning, which requires an extended training context of 16k. We did not experiment with context windows of 32k or above, because V100 machines cannot afford full fine-tuning at these lengths. But it can be expected that the overhead ratio between Full-length and PoSE will become more exaggerated as the target length increases. Consequently, we can confidently assert that our proposed approach is both memory- and time-efficient.

5.2 COMPATIBILITY WITH ROPE-BASED LLMS AND DIVERSE INTERPOLATION STRATEGIES

We also delve into the effectiveness of PoSE when applied to different RoPE-based LLMs, as well as various interpolation strategies.
Specifically, we employ PoSE on four distinct models: LLaMA-7B, LLaMA2-7B, GPT-J-6B, and Baichuan2-7B, all of which incorporate RoPE in their architectures. The original context size of LLaMA-7B and GPT-J-6B is 2k, while that of LLaMA2-7B and Baichuan2-7B is 4k. For each model, we examine the integration with Linear, NTK, and YaRN interpolation, as well as the non-fine-tuned original version for comparative purposes. The same GovReport dataset as described in Section 4.2 is utilized. The test set is truncated to the first 1k to 16k tokens for plotting the perplexity curve, as depicted in Figure 4.
Figure 3: Full-length fine-tuning v.s. PoSE in terms of (a) memory and (b) time consumption for extending LLaMA-7B from 2k to 4k / 8k / 16k context, each finishing 1,000 training steps, and (c) perplexity of both 16k-context models at every training step. We show that PoSE takes a constantly reduced time and memory for context extension, while attaining a comparable level of PPL performance with Full-length fine-tuning at each step.

Figure 4: Perplexity of LLaMA-7B, LLaMA2-7B, GPT-J-6B, and Baichuan2-7B extended to 16k via PoSE with Linear / NTK / YaRN interpolation, along with the non-fine-tuned Original model. The consistently low perplexity observed across all twelve combinations serves as an indication of the effectiveness of our method across RoPE-based LLMs and diverse interpolation strategies.
First, it is evident that PoSE is effective across all four models and three interpolation strategies, as evidenced by the low perplexities achieved by all 12 combinations in comparison to the non-fine-tuned original model. Second, we observe that NTK and YaRN interpolation generally yield superior results compared to Linear interpolation. However, it is noteworthy that NTK exhibits a significant increase in perplexity after a certain turning point, which occurs prior to reaching the target context length. This behavior is consistent with previous findings, indicating that for a given scaling factor $\alpha$, NTK cannot genuinely expand the context window by $\alpha$ times (Peng & Quesnelle, 2023; Quesnelle, 2023; Peng et al., 2023).

5.3 POTENTIAL FOR EXTREMELY-LONG CONTEXT

Because PoSE only takes a fixed context window at the training stage to extend to the target context window size, we can promisingly extend LLMs to support infinite input lengths using this method. In this section, we extend the context window size to 96k and 128k to explore PoSE's potential for extreme context window extension.
page_content='context window extension. Given the need to evaluate on extremely long documents, we have opted to\nemploy two book datasets, namely Books3 (Presser, 2020) and Gutenberg (PG-19) (Rae et al., 2019).\nBoth of these datasets consist of extensive collections of literary works, rendering them well-suited\nsubjects for the assessment of long-range modeling. For our evaluation, we randomly selected 20\nbooks from each dataset, each containing more than 128k tokens.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}
page_content='Fine-tuning LLaMA models using PoSE, we experimented with Linear / NTK / YaRN interpolation\nfor both the 96k and 128k models. To calculate perplexity, we adhere to the sliding window strategy\nadopted in Section 4.2, with an increased sliding window step of 16k to enhance evaluation efficiency.\n8' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}
Table 2: Perplexity of models extended to extreme context size via PoSE on PG-19 and Books3. We show that our training method can effectively extend the context window size to 128k when combined with YaRN interpolation.

Model              Gutenberg (PG-19)              Books3
                   32k    64k    96k    128k      32k    64k    96k    128k
PoSE-Linear-96k    10.18  11.11  13.57  -         9.98   10.90  13.42  -
PoSE-NTK-96k       7.98   20.39  38.73  -         8.29   20.82  40.39  -
PoSE-YaRN-96k      8.31   8.65   9.36   -         8.90   9.40   10.38  -
PoSE-Linear-128k   16.90  22.47  26.77  31.18     26.20  43.62  57.08  70.87
PoSE-NTK-128k      8.04   14.84  29.48  34.80     8.34   16.04  31.42  37.00
PoSE-YaRN-128k     9.32   10.36  10.77  11.33     10.56  12.30  13.07  13.81

Table 3: Performance of PoSE-extended LLaMA models on standard benchmarks in comparison with Full-length fine-tuning and the original LLaMA. We show that PoSE-extended models exhibit only marginal performance degradation compared with Full-length fine-tuning and the original version.

Model              Zero-Shot                                  Few-Shot
                   BoolQ   PIQA    WinoGrande  TruthfulQA     ARC-C   HellaSwag
LLaMA              75.11   78.67   69.85       34.08          51.19   77.75
Full-Linear-16k    70.95   77.64   69.06       31.89          48.55   74.19
Full-NTK-16k       75.80   78.08   68.98       33.83          48.81   76.57
Full-YaRN-16k      73.88   77.64   68.15       34.12          50.60   77.18
PoSE-Linear-16k    74.50   78.13   68.59       32.05          48.29   75.56
PoSE-NTK-16k       74.28   78.24   68.90       33.89          49.83   76.82
PoSE-YaRN-16k      74.28   78.02   69.06       34.00          49.23   77.04
PoSE-Linear-128k   67.71   76.22   67.56       36.16          39.93   66.04
PoSE-NTK-128k      75.35   78.18   68.98       32.71          49.66   76.19
PoSE-YaRN-128k     73.61   77.80   70.01       34.47          48.46   75.54
The outcomes of these experiments are detailed in Table 2. It is observed that PoSE successfully extends the model's context window to 96k when coupled with Linear interpolation, and further extends the context window to 128k when paired with YaRN. These promising results consolidate the effectiveness of PoSE for extreme context window extension.

5.4 EVALUATION OF CAPABILITY ON ORIGINAL CONTEXT WINDOW
In this section, we examine the capabilities of the PoSE-extended models on the original context window using standard benchmarks. We combine the Hugging Face Open LLM Leaderboard (Face, 2023) with a subset of LLaMA benchmarks to assess zero-shot and few-shot performance. For zero-shot evaluation, we employ BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), WinoGrande (Keisuke et al., 2019), and TruthfulQA (Lin et al., 2022). For few-shot evaluation, we utilize 25-shot ARC-Challenge (Clark et al., 2018) and 10-shot HellaSwag (Zellers et al., 2019). Our evaluation metrics are benchmark-specific: for BoolQ, PIQA, and WinoGrande, we report accuracy; for TruthfulQA, we report mc2; and for ARC-C and HellaSwag, we report normalized accuracy.

Table 3 summarizes the results. It is observed that PoSE-extended models exhibit only marginal performance degradation compared with Full-length fine-tuning and the original LLaMA, with the only exception being the 128k model employing linear interpolation. This indicates that while extending the context window size, PoSE effectively preserves the original language comprehension ability.
6 CONCLUSION

In this paper, we introduce Positional Skip-wisE (PoSE) training to efficiently extend the context window of Large Language Models. PoSE simulates long inputs by manipulating position indices, thereby requiring only the original context window for fine-tuning, successfully decoupling train length and target length. Experiments have shown that, compared with fine-tuning on the full length, PoSE greatly reduces memory and time overhead. Taking advantage of this, we have managed to extend the LLaMA model to 128k on 8 V100 GPUs, observing only minimal performance degradation on standard benchmarks. We have also empirically verified that PoSE is compatible with all RoPE-based LLMs and position interpolation strategies.

REFERENCES

Baichuan. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305, 2023. URL https://arxiv.org/abs/2309.10305.
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.

Aydar Bulatov, Yury Kuratov, and Mikhail Burtsev. Recurrent memory transformer. Advances in Neural Information Processing Systems, 35:11079–11091, 2022.

Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of large language models via positional interpolation. ArXiv, abs/2306.15595, 2023a.

Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. Longlora: Efficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307, 2023b.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. In NAACL, 2019.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.

Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2978–2988, 2019.

Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. 2023.

Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems, 2022.

Hugging Face. Open LLM leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, 2023.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.

Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. Transformer language models without positional encodings still learn positional information. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 1382–1390, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-emnlp.99.

Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. Efficient attentions for long document summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1419–1436, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.112.

kaiokendev. Things I'm learning while training SuperHOT. https://kaiokendev.github.io/til#extending-context-to-8k, 2023.

Sakaguchi Keisuke, Le Bras Ronan, Bhagavatula Chandra, and Choi Yejin. Winogrande: An adversarial Winograd schema challenge at scale. 2019.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention, 2023.

Benjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, and Daniel Haziza. xFormers: A modular and hackable transformer modelling library. https://github.com/facebookresearch/xformers, 2022.

Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, and Lingpeng Kong. In-context learning with many demonstration examples. arXiv preprint arXiv:2302.04931, 2023.

Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3214–3252, 2022.

Amirkeivan Mohtashami and Martin Jaggi. Landmark attention: Random-access infinite context length for transformers, 2023.

Bowen Peng and Jeffrey Quesnelle. NTK-aware scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation. https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have, 2023.

Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. YaRN: Efficient context window extension of large language models, 2023.

Ofir Press, Noah A Smith, and Mike Lewis. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409, 2021.

Shawn Presser. https://twitter.com/theshawwn/status/1320282149329784833, 2020.

Jeffrey Quesnelle. Dynamically scaled RoPE further increases performance of long context LLaMA with zero fine-tuning. https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/, 2023.

Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, Chloe Hillier, and Timothy P Lillicrap. Compressive transformers for long-range sequence modelling. arXiv preprint, 2019.

Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. ZeRO: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–16. IEEE, 2020.

Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani, Shane Legg, and Joel Veness. Randomized positional encodings boost length generalization of transformers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 1889–1903, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-short.161.

Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2021.

Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. A length-extrapolatable transformer. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 14590–14604, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.816.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.

Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, and Piotr Miłoś. Focused transformer: Contrastive training for context scaling. arXiv preprint arXiv:2307.03170, 2023.

Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 billion parameter autoregressive language model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021.

Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. Augmenting language models with long-term memory. arXiv preprint arXiv:2306.07174, 2023.

Yuhuai Wu, Markus N Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers. arXiv preprint arXiv:2203.08913, 2022.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791–4800, 2019.

Azerbayev Zhangir, Ayers Edward, and Bartosz Piotrowski. Proof-pile. https://github.com/zhangir-azerbayev/proof-pile, 2022.

Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Guodong Long, Can Xu, and Daxin Jiang. Fine-grained distillation for long document retrieval. arXiv preprint arXiv:2212.10423, 2022.
Table 4: Comparison of different methods for choosing $v_i$. We report perplexity with evaluation context windows ranging from 2k to 16k. We show that these variations have relatively little impact on the outcomes of fine-tuning.

Method              GovReport                    Proof-pile
                    2k     4k     8k     16k     2k     4k     8k     16k
$v_i \sim \mathcal{U}(\ldots)$   4.84   4.68   4.60   4.60    2.95   2.74   2.61   2.60
$v_i = 0$           4.85   4.72   4.64   4.68    2.96   2.75   2.63   2.61
$v_i = u_i$         4.84   4.68   4.60   4.60    2.95   2.73   2.60   2.56

A ABLATION OF TEXT CONTAINED WITHIN EACH CHUNK

PoSE divides the original context window into several chunks, and modifies the position indices of each chunk to cover a wider range of relative positions in a fixed window. However, it does not impose a particular constraint on the text contained within each chunk. Recall that in Equation 5, we assign the content of chunk $c_i$ as below:

$$
c_i = x[v_i + st_i : v_i + st_i + l_i]
$$

In this section, we explore several strategies for determining $v_i$: 1) sampling from a uniform distribution, $v_i \sim \mathcal{U}(\{v_{i-1}, \ldots, L_x - L_c\})$, which is the one used in PoSE; 2) $v_i = 0$, which results in genuinely continuous content for the chunks; 3) $v_i = u_i$, aligning the manipulated position indices with actual positions in the original text. We use the same test setting as Section 4.2, extending LLaMA-7B from 2k to 16k context. As can be seen in Table 4, these variations have relatively little impact on the outcomes of fine-tuning.

B ANALYSIS OF CHUNK NUMBER N
