text "page_content='POSE: E FFICIENT CONTEXT WINDOW EXTENSION OF\nLLM S VIA POSITIONAL SKIP-WISE TRAINING\nDawei Zhu∗†‡Nan Yang‡Liang Wang‡Yifan Song†Wenhao Wu†\nFuru Wei‡Sujian Li†B\n†School of Computer Science, Peking University\n‡Microsoft Corporation\nhttps://github.com/dwzhu-pku/PoSE\nABSTRACT\nLarge Language Models (LLMs) are trained with a pre-defined context length,\nrestricting their use in scenarios requiring long inputs. Previous efforts for adapting' metadata={'source': 'pdfs/paper_2.pdf', 'page': 0}" "page_content='LLMs to a longer length usually requires fine-tuning with this target length ( Full-\nlength fine-tuning), suffering intensive training cost. To decouple train length\nfrom target length for efficient context window extension, we propose Positional\nSkip-wis E(PoSE) training that smartly simulates long inputs using a fixed context\nwindow. This is achieved by first dividing the original context window into several\nchunks, then designing distinct skipping bias terms to manipulate the position' metadata={'source': 'pdfs/paper_2.pdf', 'page': 0}" "page_content='indices of each chunk. These bias terms and the lengths of each chunk are altered\nfor every training example, allowing the model to adapt to all positions within\ntarget length. Experimental results show that PoSE greatly reduces memory and\ntime overhead compared with Full-length fine-tuning, with minimal impact on per-\nformance. Leveraging this advantage, we have successfully extended the LLaMA\nmodel to 128k tokens using a 2k training context window. Furthermore, we empir-' metadata={'source': 'pdfs/paper_2.pdf', 'page': 0}" "page_content='ically confirm that PoSE is compatible with all RoPE-based LLMs and position\ninterpolation strategies. Notably, our method can potentially support infinite length,\nlimited only by memory usage in inference. With ongoing progress for efficient\ninference, we believe PoSE can further scale the context window beyond 128k.\n1 I NTRODUCTION\nLarge Language Models (LLMs) have revolutionized language modeling and demonstrated impres-' metadata={'source': 'pdfs/paper_2.pdf', 'page': 0}" "page_content='sive abilities to perform various tasks (Brown et al., 2020). However, even with their remarkable\ncapacity, these LLMs remain restricted by pre-defined context window sizes, suffering from notable\nperformance decline when input tokens exceeds these limits. Nevertheless, numerous application\nscenarios demand extremely long input sequences, including long document summarization (Huang\net al., 2021), in-context learning with numerous examples (Li et al., 2023), and long document' metadata={'source': 'pdfs/paper_2.pdf', 'page': 0}" "page_content='retrieval (Zhou et al., 2022), etc. This naturally poses a significant challenge of context window\nextension : Extending the context window of a pre-trained LLM to accommodate longer sequences.\nNaively fine-tuning LLMs on inputs of target length for window extension has received limited success\ndue to the large disruption introduced by new position indices (Chen et al., 2023a). Addressing\nthis, Position Interpolation (Chen et al., 2023a; kaiokendev, 2023; Peng et al., 2023) propose to' metadata={'source': 'pdfs/paper_2.pdf', 'page': 0}" "page_content='down-scale the position indices to match the original window size, yielding improved results for\ncontext extension. 
However, these methods still rely on Full-length fine-tuning, i.e., fine-tuning with context of target length, which is memory- and time-intensive due to the computational complexity that increases quadratically with input length. For example, Chen et al. (2023a) use 32 A100 GPUs to extend LLaMA models from 2k to 8k context, and 128 A100 GPUs for even larger contexts. These' metadata={'source': 'pdfs/paper_2.pdf', 'page': 0}" "page_content='computational costs have made it impractical to extend the context window to extreme lengths.
In this paper, we introduce Positional Skip-wisE (PoSE) fine-tuning to decouple the fine-tuning length from the target context window length, unleashing the possibility of efficiently extending
∗Work done during internship at MSRA.
arXiv:2309.10400v2 [cs.CL] 10 Oct 2023' metadata={'source': 'pdfs/paper_2.pdf', 'page': 0}" "page_content='[Figure 1 diagram: Full-length fine-tuning uses 8,192 consecutive position indices (0-8191); PoSE keeps 2,048 tokens whose position indices skip ahead (target / original context size = 8192 / 2048). Example relative positions covered: train example #1: [1, 1535] ∪ [6049, 8095]; train example #2: [1, 1024] ∪ [2529, 4575].]
Figure 1: Position indices of Full-length fine-tuning vs. PoSE fine-tuning for extending the context window size from 2,048 to 8,192. At each iteration, the former directly takes 8,192 tokens for' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='fine-tuning, while PoSE manipulates the position indices of 2,048 tokens to simulate longer inputs. For example, we partition the original context window of 2,048 tokens into two chunks, and adjust the position indices of the second chunk by adding a distinct skipping bias term. These bias terms, as well as the length of each chunk, are altered for each training example, so that the model can adapt to all relative positions of the target context window through fine-tuning.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='context window to an extreme size. The key idea of PoSE is to simulate long inputs by manipulating position indices within a fixed context window. As depicted in Figure 1, we partition the original context window into several chunks, and adjust the position indices of each chunk by adding a distinct skipping bias term. These bias terms, as well as the length of each chunk, are altered for each training' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='example, so that the model can adapt to all positions (including both absolute and relative) within the target context window through fine-tuning. Meanwhile, by maintaining continuous position indices within each chunk, PoSE bears a close resemblance to pre-training. As a result, the model’s pre-trained capacity for language modeling and comprehension is retained to the greatest degree.
The advantages of our PoSE are threefold: 1) Memory and Time Efficiency: By only requiring' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='the original context size for fine-tuning, PoSE circumvents the quadratic increase in computational complexity with respect to target length during the fine-tuning stage, thereby significantly reducing memory and time overhead. 2) Potential for Extremely-Long Context: We manage to extend the context window of LLaMA (Touvron et al., 2023a) by up to 64 times (2k → 128k, k=1,024) while preserving decent ability of language modeling and understanding. 
3) Compatible with all RoPE-based' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='LLMs and PI strategies: The effectiveness of PoSE has been empirically validated across several\nrepresentative RoPE-based LLMs, including LLaMA, LLaMA2 (Touvron et al., 2023b), GPT-J (Wang\n& Komatsuzaki, 2021), and Baichuan (Baichuan, 2023). Additionally, PoSE has been demonstrated\nto be compatible with a variety of position interpolation methods, including Linear (Chen et al.,\n2023a), NTK (Peng & Quesnelle, 2023), and YaRN (Peng et al., 2023) interpolation.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='Notably, by decoupling the fine-tuning and target length, PoSE can theoretically extend context\nwindow to an infinite length. The only constraint is the memory usage during the inference phase.\nHopefully, with the continuous advancements in efficient inference techniques, including Flash\nAttention (Dao et al., 2022; Dao, 2023), xFormers (Lefaudeux et al., 2022), vLLM (Kwon et al.,\n2023), etc, we believe PoSE can promisingly push the context window size to a even larger scale.\n2 R ELATED WORK' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='Training Length-Extrapolatable Models. Length extrapolation aims to ensure that the model\ncontinues to perform well, even when the number of input tokens during inference exceeds the size\nof the context window on which the model is trained (Press et al., 2021). To this end, a series of\npositional embedding schemes have been proposed, including ALibi (Press et al., 2021), xPos (Sun\net al., 2023), NoPos (Haviv et al., 2022), etc.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='Similar to our work, Ruoss et al. (2023) also attempted to simulate longer sequences during\ntraining time to mitigate out-of-distribution lengths. They proposed randomized positional encoding\n(RandPos), which randomly selected an ordered subset of position indices from longer sequences.\n2' metadata={'source': 'pdfs/paper_2.pdf', 'page': 1}" "page_content='Our proposed method, PoSE, diverges from their approach in several key aspects: First, RandPos\nis a positional embedding scheme designed for pre-training encoder-only models from scratch to\nenhance length generalization ability. In contrast, PoSE is a fine-tuning method that aims to efficiently\nextend the context window of pre-trained LLMs, the majority of which follow a decoder-only\narchitecture. Second, in RandPos, the position indices between adjacent tokens are not continuous.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='However, in PoSE, the position indices within each chunk are intentionally made continuous to closely\nresemble the pre-training phase, therefore reducing the risk of disrupting the language modeling and\nunderstanding abilities learned during the pre-training stage.\nFine-tuning LLMs for Longer Context. Differing from length extrapolation, which primarily\ninvolves training a model from scratch to support lengths exceeding those it was initially trained for,' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='context window extension focuses on extending the context window of a pre-trained LLM. Directly\nfine-tuning an existing LLM with a longer context window has been shown to progress slowly (Chen\net al., 2023a). To expedite and stabilize training, Chen et al. (2023a) first down-scaled position\nindices to match original context size through Linear Position Interpolation. 
Subsequently, a range of Position Interpolation (PI) strategies have been introduced, including NTK (Peng & Quesnelle,' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='2023) and YaRN (Peng et al., 2023). More recently, LongLora (Chen et al., 2023b) proposes shift short attention to approximate full attention. However, all these methods require Full-length fine-tuning, suffering a computational cost that grows with target context size. By contrast, our method manages to decouple the training and target lengths, requiring only the original context size for fine-tuning.
Memory Transformers. An alternative strategy for managing extremely long input sequences' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='involves the adoption of memory mechanisms. Typically, there are two lines of research for utilizing memory: the recurrence-based approach (Dai et al., 2019; Bulatov et al., 2022) and the retrieval-based approach (Wu et al., 2022; Wang et al., 2023; Tworkowski et al., 2023). The recurrence-based approach involves segmenting long inputs and reusing the hidden states obtained from preceding segments to serve as memory for the current segment. Nonetheless, this architecture is hindered' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='by information loss and limited capacity for random access. On the other hand, the retrieval-based paradigm entails encoding prior sequences as (key, value) pairs and utilizing a memory retriever and reader to extract previously encoded information. The primary limitation of this approach is the absence of interaction between discrete memory segments. More recently, Mohtashami & Jaggi (2023) introduced landmark attention, which facilitates random access to any chunk of the input by' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='introducing landmark tokens. In contrast, our method achieves full access to the entire input without any modifications to the attention mechanism.
3 METHODOLOGY
3.1 PRELIMINARIES
Rotary Position Embedding (RoPE). The use of RoPE (Su et al., 2021) has become pervasive in contemporary LLMs, including LLaMA (Touvron et al., 2023a), GPT-J (Wang & Komatsuzaki, 2021), etc. It encodes position information of tokens with a rotation matrix that naturally incorporates' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='explicit relative position dependency. To elucidate, given a hidden vector h = [h_0, h_1, ..., h_{d-1}], where d is the hidden dimension, and a position index m, RoPE operates as follows:
f(h, m) = \begin{pmatrix} h_0 \\ h_1 \\ h_2 \\ h_3 \\ \vdots \\ h_{d-2} \\ h_{d-1} \end{pmatrix} \otimes \begin{pmatrix} \cos m\theta_0 \\ \cos m\theta_0 \\ \cos m\theta_1 \\ \cos m\theta_1 \\ \vdots \\ \cos m\theta_{d/2-1} \\ \cos m\theta_{d/2-1} \end{pmatrix} + \begin{pmatrix} -h_1 \\ h_0 \\ -h_3 \\ h_2 \\ \vdots \\ -h_{d-1} \\ h_{d-2} \end{pmatrix} \otimes \begin{pmatrix} \sin m\theta_0 \\ \sin m\theta_0 \\ \sin m\theta_1 \\ \sin m\theta_1 \\ \vdots \\ \sin m\theta_{d/2-1} \\ \sin m\theta_{d/2-1} \end{pmatrix}   (1)' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='where θ_j = 10000^{-2j/d}, j ∈ {0, 1, ..., d/2-1}. Unlike previous absolute position encodings that are directly applied to the input vector x, RoPE is employed on the query and key vectors at each layer. 
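To make the rotation in Equation 1 concrete, the following is a minimal numpy sketch, not taken from the paper or its repository, that applies RoPE to a query and a key vector and checks that their inner product depends only on the relative offset m - n (the property formalized in Equation 2 below); the function name apply_rope, the toy dimension d = 8, and the chosen positions are illustrative assumptions.

import numpy as np

def apply_rope(h: np.ndarray, m: int) -> np.ndarray:
    # Rotate a hidden vector h (of even dimension d) to position m, following Equation 1.
    d = h.shape[-1]
    j = np.arange(d // 2)
    theta = 10000.0 ** (-2.0 * j / d)            # theta_j = 10000^(-2j/d)
    cos = np.repeat(np.cos(m * theta), 2)        # (cos m*theta_0, cos m*theta_0, cos m*theta_1, ...)
    sin = np.repeat(np.sin(m * theta), 2)
    h_rot = np.empty_like(h)                     # pair-wise partner: (-h_1, h_0, -h_3, h_2, ...)
    h_rot[0::2] = -h[1::2]
    h_rot[1::2] = h[0::2]
    return h * cos + h_rot * sin

rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
s1 = np.dot(apply_rope(q, 5), apply_rope(k, 2))      # offset m - n = 3
s2 = np.dot(apply_rope(q, 105), apply_rope(k, 102))  # same offset, shifted absolute positions
print(np.isclose(s1, s2))                            # True: the score depends only on m - n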
Considering a query vector q at position m and a key vector k at position n, the attention score' metadata={'source': 'pdfs/paper_2.pdf', 'page': 2}" "page_content='a(q, k) is defined as follows:
a(q, k) = \langle f(q, m), f(k, n) \rangle = \sum_{j=0}^{d/2-1} \left[ (q_{2j} k_{2j} + q_{2j+1} k_{2j+1}) \cos (m-n)\theta_j + (q_{2j} k_{2j+1} - q_{2j+1} k_{2j}) \sin (m-n)\theta_j \right] := g(q, k, \theta, m-n)   (2)
Hence, RoPE encodes position information in a relative manner, as the attention score depends on the relative distances between positions rather than their absolute position values.
Problem Formulation. Given a Large Language Model pre-trained with a context window size of' metadata={'source': 'pdfs/paper_2.pdf', 'page': 3}" "page_content='L_c, our objective is to extend this context size to a target length L_t, so that the model maintains good performance when processing input sequences containing a maximum of L_t tokens.
Position Interpolation (PI). In contrast to directly extending the position indices to L_t - 1 when dealing with an input text x = {x_0, x_1, ..., x_{L_t}}, position interpolation down-scales the position indices to align with the original context window size L_c. This approach effectively mitigates the' metadata={'source': 'pdfs/paper_2.pdf', 'page': 3}" "page_content='risk of encountering extreme values and has been empirically demonstrated to enhance stability during fine-tuning. Various interpolation strategies have been proposed, with α = L_t / L_c denoting the scaling factor:
• Linear Interpolation. As described by Chen et al. (2023a) and kaiokendev (2023), linear interpolation involves a proportional down-scaling of the position index m to m/α. Consequently, the attention score between a query q at position m and a key k at position n becomes g(q, k, θ, (m-n)/α),' metadata={'source': 'pdfs/paper_2.pdf', 'page': 3}" "page_content='as defined in Equation 2. Theoretical analysis has substantiated that the interpolated attention score exhibits significantly greater stability compared to the extrapolated counterpart.
• Neural Tangent Kernel (NTK) Interpolation. In contrast to linear interpolation, NTK interpolation alters the base of RoPE, effectively modifying the rotational "speed" of each dimension of RoPE (Peng & Quesnelle, 2023). Specifically, the original θ_j = 10000^{-2j/d}, j ∈ {0, 1, ..., d/2-1}' metadata={'source': 'pdfs/paper_2.pdf', 'page': 3}" "page_content='in RoPE is transformed into θ′_j = (10000·λ)^{-2j/d}, where λ = α^{d/(d-2)}. It is noteworthy that the value of λ is chosen to ensure that m·θ′_{d/2-1} = (m/α)·θ_{d/2-1}.
• YaRN Interpolation. Different from Linear and NTK interpolation that treat each dimension of RoPE equally, YaRN (Peng et al., 2023) employs a ramp function to combine Linear and NTK interpolation at varying proportions across different dimensions. Simultaneously, it introduces a' metadata={'source': 'pdfs/paper_2.pdf', 'page': 3}" "page_content='temperature factor to mitigate the distribution shift of the attention matrix caused by long inputs.
3.2 PROPOSED APPROACH: POSITIONAL SKIP-WISE TRAINING (POSE)
Although position interpolation effectively addresses out-of-distribution position indices, extending to an extreme length by fine-tuning on a context window of this size remains impractical, owing to the quadratic growth in computational complexity of attention as sequence length increases. 
Instead, we explore training within the original context window L_c and achieving context window extension by manipulating position indices to simulate longer inputs.
There are two design desiderata for this endeavor: First, to avoid out-of-distribution positions during inference, the relative distances of the manipulated position indices should comprehensively cover the range {1, ..., L_t - 1}. Second, fine-tuning with the manipulated position indices should not' metadata={'source': 'pdfs/paper_2.pdf', 'page': 3}" "page_content='harm the original abilities of LLMs, so the structure of the manipulated position indices should adhere to the original structure to the greatest extent possible.
Initially, we randomly divide the original context window L_c into N chunks c_0, c_1, ..., c_{N-1}, each with lengths l_0, l_1, ..., l_{N-1}, where \sum_{i=0}^{N-1} l_i = L_c. We introduce the starting index st_i for each chunk c_i, which facilitates the formulation of its position indices as follows:
Pos(c_i) = \{ st_i, st_i + 1, \ldots, st_i + l_i - 1 \}, \quad st_i = \sum_{j=0}^{i-1} l_j   (3)' metadata={'source': 'pdfs/paper_2.pdf', 'page': 3}" "page_content='Subsequently, we employ the discrete uniform distribution U(S) to sample a skipping bias term u_i ∼ U({u_{i-1}, ..., L_t - L_c}) for each chunk c_i. This bias term is applied to the corresponding' metadata={'source': 'pdfs/paper_2.pdf', 'page': 3}" "page_content='chunk to transform the original position indices into:
PoSE(c_i) = \{ u_i + st_i, u_i + st_i + 1, \ldots, u_i + st_i + l_i - 1 \}   (4)
Note that the constraint u_i ≥ u_{i-1} is applied to prevent position index overlaps between chunks. Intuitively, the introduction of skipping bias terms exposes the model to a more diverse range of relative positions. To achieve comprehensive coverage of the target context window, we re-sample both the' metadata={'source': 'pdfs/paper_2.pdf', 'page': 4}" "page_content='length and skipping bias term of every chunk for each training example. Moreover, the continuity of position indices within each chunk closely resembles the structure employed during pre-training. Consequently, fine-tuning the model on these new position indices for language modeling does not compromise its original capabilities.
Concerning the text contained within each chunk, a similar procedure is followed to select continuous' metadata={'source': 'pdfs/paper_2.pdf', 'page': 4}" "page_content='spans of tokens from the input text x = {x_0, x_1, ..., x_{L_x}}. To elaborate, we begin by sampling a bias term v_i ∼ U({v_{i-1}, ..., L_x - L_c}), followed by assigning the content of chunk c_i as below:
c_i = x[v_i + st_i : v_i + st_i + l_i]   (5)
Notably, we have also explored other strategies for assigning v_i, including scenarios where v_i = 0, which results in genuinely continuous content for the chunks, or v_i = u_i, aligning the manipulated' metadata={'source': 'pdfs/paper_2.pdf', 'page': 4}" "page_content='position indices with actual positions in the original text. However, we observe that these variations have relatively little impact on the outcomes of fine-tuning.
After position indices and content for each chunk are settled, we perform position interpolation for stabilized fine-tuning. For simplicity, we set the initial bias terms u_0 and v_0 to 0.
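As a concrete illustration of Equations 3-5 for the N = 2 setting adopted in this paper, the sketch below samples chunk lengths, skipping bias terms, and content offsets for one training example (extending L_c = 2,048 to L_t = 8,192 as in Figure 1). It is a simplified sketch with our own variable names and helper make_pose_example; the released code at https://github.com/dwzhu-pku/PoSE is the authoritative implementation, and position interpolation would still be applied on top of the resulting indices.

import random

def make_pose_example(tokens, Lc=2048, Lt=8192, N=2):
    # Sample chunk lengths l_0, ..., l_{N-1} with sum(l_i) = Lc; st_i follows Equation 3.
    cuts = sorted(random.sample(range(1, Lc), N - 1))
    lens = [b - a for a, b in zip([0] + cuts, cuts + [Lc])]

    pos_ids, content = [], []
    u_i, v_i, st = 0, 0, 0                     # initial bias terms u_0 = v_0 = 0
    for i, l_i in enumerate(lens):
        if i > 0:
            # Skipping bias u_i ~ U({u_{i-1}, ..., Lt - Lc}); content bias v_i sampled likewise.
            u_i = random.randint(u_i, Lt - Lc)
            v_i = random.randint(v_i, len(tokens) - Lc)
        # Equation 4: position indices {u_i + st_i, ..., u_i + st_i + l_i - 1}.
        pos_ids.extend(range(u_i + st, u_i + st + l_i))
        # Equation 5: chunk content x[v_i + st_i : v_i + st_i + l_i].
        content.extend(tokens[v_i + st : v_i + st + l_i])
        st += l_i
    return content, pos_ids

toks, ids = make_pose_example(list(range(10_000)))   # a dummy 10k-token input
print(len(toks), len(ids), ids[0], ids[-1])          # 2048 tokens; indices spread across [0, Lt)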
In terms of the chunk number N, we view it as a trade-off between efficiency and effectiveness: an increase in the number of chunks deviates further from the position structure of pre-training, which may harm the ability acquired during pre-training. Hence, in this paper we set N to 2, exposing the models to a wider range of relative positions, while adhering as closely to the original position structure as possible. (See Appendix A and B for further discussion of v_i and N.)
4 EXPERIMENTS
In this section, we conduct experiments to verify the effectiveness of PoSE for context window' metadata={'source': 'pdfs/paper_2.pdf', 'page': 4}" "page_content='extension. Our method demonstrates impressive results on context lengths of both 16k and 32k for language modeling as well as passkey retrieval. Other advantages of PoSE are discussed in Section 5.
4.1 SETUPS
Training Procedure. For each setting in the main experiments, we train LLaMA-7B with the next token prediction objective. This training process comprises 1,000 steps, employing a global batch size' metadata={'source': 'pdfs/paper_2.pdf', 'page': 4}" "page_content='of 64 on 8 V100 GPUs using DeepSpeed ZeRO stage 3 (Rajbhandari et al., 2020). The fine-tuning dataset is sourced from The Pile (Gao et al., 2020), with a minimum length requirement of 2,048 tokens. Our default choice for interpolation strategies is linear interpolation. For evaluation, we use a single A100 GPU. Flash Attention V2 (Dao, 2023) is applied, making it possible to evaluate long documents of up to 128k tokens (k=1,024).' metadata={'source': 'pdfs/paper_2.pdf', 'page': 4}" "page_content='Evaluation Tasks and Datasets. We examine the ability of long text modeling on two tasks: language modeling and passkey retrieval. The language modeling task is a fundamental task that reflects the overall capability of a model in handling long text. Passkey retrieval, on the other hand, can effectively measure the maximum distance that a token can attend to during the inference stage. We evaluate language modeling on the GovReport (Huang et al., 2021) and Proof-pile (Zhangir et al.,' metadata={'source': 'pdfs/paper_2.pdf', 'page': 4}" "page_content='2022) datasets. For passkey retrieval, we follow Mohtashami & Jaggi (2023) to construct synthetic prompts for evaluation.
Baseline Methods. We compare our PoSE training method against the following baselines:
• Full-length fine-tuning takes input tokens of target length for fine-tuning. For this method, computational complexity scales quadratically with the target context window size.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 4}" "page_content='Table 1: Perplexity of models trained with different methods. We conduct evaluation on the GovReport and Proof-pile datasets, varying evaluation context window size from 2k to 32k. 
Our PoSE, with\na fixed training window size of 2k, effectively extended to a target context size of 16k / 32k for\ninference while receiving only minimal performance degradation compared to Full-length.\nMethodContext size GovReport Proof-pile\nTrain / Target 2k 4k 8k 16k 32k 2k 4k 8k 16k 32k' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='None - / - 4.74 >103>103>103>1032.83 >103>103>103>103\nFull-length 16k / 16k 4.87 4.70 4.61 4.59 - 2.93 2.71 2.58 2.53 -\nRandPos2k / 16k 11.63 11.17 11.54 15.16 - 7.26 6.83 6.76 7.73 -\n2k / 32k 93.43 95.85 91.79 93.22 97.57 60.74 63.54 60.56 63.15 66.47\nPoSE (Ours)2k / 16k 4.84 4.68 4.60 4.60 - 2.95 2.74 2.61 2.60 -\n2k / 32k 4.91 4.76 4.68 4.64 4.66 3.01 2.78 2.66 2.60 2.59\n•RandPos (Ruoss et al., 2023) is initially designed to train an encoder-only model from scratch' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='for length extrapolation. However, since it shares similar idea of simulating longer sequences via\nchanging position indices, we include it for a comprehensive comparison. Given the original /\ntarget context window length Lc/Lt, it uniquely samples Lcpositions from the set {0, ..., L t−1},\narranges them in ascending order, and employs them as new position indices for training.\n4.2 L ANGUAGE MODELING' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='First, we investigate the impacts of different fine-tuning methods on long sequence language modeling\nusing the GovReport and Proof-pile datasets. GovReport is a summarization dataset comprising\n19,402 reports published by the Congress and the U.S. Government, with an average document length\nof 7,866 tokens. We randomly select 50 reports containing more than 32,768 tokens for evaluation.\nSimilarly, Proof-pile is a 13GB mathematical dataset of long mathematical documents. In line with' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='the approach taken for GovReport, we choose 50 samples from Proof-pile that contain more than\n32,768 tokens for evaluation.\nTable 1 presents the results of scaling to 16k and 32k using Full-length training, RandPos, and PoSE.\nFor each scaled model, as well as the non-fine-tuned LLaMA model (None), we report perplexity\nscores at various evaluation context window sizes, ranging from 2k to 32k, employing the sliding' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='window approach proposed by Press et al. (2021). For evaluation efficiency, we set the stride of the\nsliding window to 1,024.\nFirst, we observe an overall decreasing trend of perplexity for both models scaled to 16k and 32k via\nPoSE as evaluation context window size increases, proving their abilities to leverage longer context.\nSecond, with significantly shorter context length during fine-tuning, our PoSE achieves comparable' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='results with Full-length, consolidating its effectiveness. Third, our method achieves much stronger\nresults than RandPos. We suppose it is because our manipulated position indices closely resembles\nthat of pre-training, hereby preserving the pre-trained language modeling ability to the greatest extent.\nWe also notice that all the scaling methods suffers certain performance degradation as the supported' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='context length increases. 
We perceive this as a trade-off between the quantity of tokens the model can\nprocess and the level of granularity in the attention the model can pay to each individual token.\n4.3 P ASSKEY RETRIEVAL FOR EFFECTIVE CONTEXT WINDOW\nTo effectively measure the maximum distance that a token can attend to during the inference stage,\nwe adopt the passkey retrieval test proposed by Mohtashami & Jaggi (2023). In this test, models are' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='tasked with recovering a random passkey hidden within a lengthy document. Prompt template used\nfor this task is presented in Figure 2a.\nSpecifically, we compare the non-fine-tuned LLaMA model (denoted as None ) with the PoSE-\nextended version for 16k and 32k context window size. For each model, we vary the prompt length\nfrom 2k to 32k. In each case, we conduct the passkey retrieval test for 50 times, with a random passkey\n6' metadata={'source': 'pdfs/paper_2.pdf', 'page': 5}" "page_content='There is an important info hidden inside a lot of irrelevant \ntext. Find it and memorize them. I will quiz you about the \nimportant information there.\nThe grass is green. The sky is blue. The sun is yellow. Here \nwe go. There and back again. (repeat x times)\nThe pass key is 81501 . Remember it. 81501 is the pass key.\nThe grass is green. The sky is blue. The sun is yellow. Here \nwe go. There and back again. (repeat y times)\nWhat is the pass key? The pass key is(a)' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='2k 8k 16k 24k 32k / Tokens020406080100Accuracy (%)None\nPoSE-16k\nPoSE-32k (b)\nFigure 2: (a) Prompt template used for passkey retrieval; (b) retrieval accuracy for the non-fine-tuned\nLLaMA model ( None ), and the PoSE-extended counterparts for 16k / 32k window size. Both PoSE-\nextended models maintain a high retrieval accuracy ( ≥90%) within their respective context window.\nof 5 digits generated and placed at a random position inside the prompt for each trial. Figure 2b' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='illustrates the results. For the none-fine-tuned LLaMA model ( None ), the retrieval accuracy rapidly\ndrops to 0 when the prompt length exceeds 2k. In contrast, both PoSE-extended models managed\nto maintain a high retrieval accuracy ( ≥90%) within their respective target context window. This\nindicates that models trained via PoSE genuinely possess the capability to attend to all tokens within\nthe extended context windows.\n5 A NALYSIS' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='In this section, we analyze the advantages of PoSE, including 1) memory and time efficiency;\n2) compatibility with all RoPE-based LLMs and diverse interpolation strategies; 3) potential for\nextremely-long context. In Section 5.4, We also verify that model performance within the original\ncontext window only receives minimal degradation.\n5.1 M EMORY AND TIMEEFFICIENCY\nWe study the memory and time efficiency of PoSE compared with Full-length fine-tuning. For each' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='method, we scale LLaMA-7B to 4k / 8k / 16k through 1,000 training steps with a global batch size of\n16 on 8 V100 GPUs. Experiment results are demonstrated in Figure 3. Figure 3(a) and (b) respectively\nillustrates memory and time consumption for 1,000 steps of Full-length versus PoSE. 
While the training cost of Full-length fine-tuning increases rapidly with the target window length, PoSE only requires a fixed quota of memory and time for context extension, which is significantly lower. Figure 3(c) further' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='compares the model perplexity of the two training methods at different steps on GovReport. Notably, both models achieve relatively low perplexity levels within the initial 100 training steps. Moreover, at each step, our proposed PoSE, while requiring only a training context size of 2k tokens, exhibits very close language modeling ability to Full-length fine-tuning, which requires an extended training context of 16k. We did not experiment with context windows of 32k or above, because V100 machines' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='cannot afford full fine-tuning of these lengths. But it can be expected that the overhead ratio between Full-length and PoSE will become even more pronounced as the target length increases. Consequently, we can confidently assert that our proposed approach is both memory- and time-efficient.
5.2 COMPATIBILITY WITH ROPE-BASED LLMS AND DIVERSE INTERPOLATION STRATEGIES
We also delve into the effectiveness of PoSE when applied to different RoPE-based LLMs, as' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='well as various interpolation strategies. Specifically, we employ PoSE on four distinct models: LLaMA-7B, LLaMA2-7B, GPT-J-6B, and Baichuan2-7B, all of which incorporate RoPE in their architectures. The original context size of LLaMA-7B and GPT-J-6B is 2k, while that of LLaMA2-7B and Baichuan2-7B is 4k. For each model, we examine the integration with Linear, NTK, and YaRN interpolation, as well as the non-fine-tuned original version for comparative purposes. The same' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='GovReport dataset as described in Section 4.2 is utilized. The test set is truncated to the first 1k to' metadata={'source': 'pdfs/paper_2.pdf', 'page': 6}" "page_content='[Figure 3 panels: (a) Memory (GB) and (b) Time (h) at 4k / 8k / 16k for Full-length vs. PoSE, with an OOM marker in the memory panel; (c) Perplexity over 10-1,000 training steps for both methods.]
Figure 3: Full-length fine-tuning vs. PoSE in terms of (a) Memory and (b) Time consumption for extending LLaMA-7B from 2k to 4k / 8k / 16k context, each finishing 1,000 training steps. (c) Perplexity of both 16k-context models at every training step. We show that PoSE takes a' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}" "page_content='constant, reduced amount of time and memory for context extension, while attaining a comparable level of PPL performance with Full-length fine-tuning at each step.
[Figure 4 panels: perplexity over evaluation lengths 1k-16k for LLaMA-7B, LLaMA2-7B, GPT-J-6B, and Baichuan2-7B; curves: Original, PoSE-Linear, PoSE-NTK, PoSE-YaRN.]
Figure 4: Perplexity of LLaMA-7B, LLaMA2-7B, GPT-J-6B, Baichuan2-7B extended to 16k via' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}" "page_content='PoSE with Linear / NTK / YaRN interpolation, along with the non-fine-tuned Original model. The consistently low perplexity observed across all nine combinations serves as an indication of the effectiveness of our method across RoPE-based LLMs and diverse interpolation strategies.
16k tokens for plotting the perplexity curve, as depicted in Figure 4. 
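Before turning to the results, a brief sketch of how the Linear and NTK strategies examined here adjust RoPE for a scaling factor alpha = L_t / L_c may be helpful (YaRN's per-dimension ramp and temperature factor are omitted); this is an illustrative summary with hypothetical function names, not code from any of the evaluated models.

import numpy as np

def rope_thetas(d: int, base: float = 10000.0) -> np.ndarray:
    j = np.arange(d // 2)
    return base ** (-2.0 * j / d)                  # theta_j = base^(-2j/d)

def linear_scaled_positions(position_ids: np.ndarray, alpha: float) -> np.ndarray:
    # Linear interpolation: down-scale the position index m to m / alpha; thetas stay unchanged.
    return position_ids / alpha

def ntk_thetas(d: int, alpha: float) -> np.ndarray:
    # NTK interpolation: keep the position ids but enlarge the RoPE base instead:
    # theta'_j = (10000 * lam)^(-2j/d) with lam = alpha^(d / (d - 2)),
    # chosen so that m * theta'_{d/2-1} equals (m / alpha) * theta_{d/2-1}.
    lam = alpha ** (d / (d - 2))
    return rope_thetas(d, base=10000.0 * lam)

d, alpha, m = 128, 8.0, 16384
print(linear_scaled_positions(np.array([m]), alpha))                                # [2048.]
print(np.isclose(m * ntk_thetas(d, alpha)[-1], (m / alpha) * rope_thetas(d)[-1]))   # True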
First, it is evident that PoSE is\neffective across all four models and three interpolation strategies, as evidenced by the low perplexities' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}" "page_content='achieved by all 12 combinations in comparison to the non-fine-tuned original model. Second, we\nobserve that NTK and YaRN interpolation generally yields superior results compared to Linear\ninterpolation. However, it is noteworthy that NTK exhibits a significant increase in perplexity after\na certain turning point, which occurs prior to reaching the target context length. This behavior is\nconsistent with previous findings, indicating that for a given scaling factor α, NTK cannot genuinely' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}" "page_content='expand the context window by αtimes (Peng & Quesnelle, 2023; Quesnelle, 2023; Peng et al., 2023).\n5.3 P OTENTIAL FOR EXTREMELY -LONG CONTEXT\nBecause PoSE only takes a fixed context window at training stage to extend to target context window\nsize, we can promisingly extend LLMs to support infinite input lengths using this method. In this\nsection, we extend context window size to 96k and 128k to explore PoSE’s potential for extreme' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}" "page_content='context window extension. Given the need to evaluate on extremely long documents, we have opted to\nemploy two book datasets, namely Books3 (Presser, 2020) and Gutenberg (PG-19) (Rae et al., 2019).\nBoth of these datasets consist of extensive collections of literary works, rendering them well-suited\nsubjects for the assessment of long-range modeling. For our evaluation, we randomly selected 20\nbooks from each dataset, each containing more than 128k tokens.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}" "page_content='Fine-tuning LLaMA models using PoSE, we experimented with Linear / NTK / YaRN interpolation\nfor both the 96k and 128k models. To calculate perplexity, we adhere to the sliding window strategy\nadopted in Section 4.2, with an increased sliding window step of 16k to enhance evaluation efficiency.\n8' metadata={'source': 'pdfs/paper_2.pdf', 'page': 7}" "page_content='Table 2: Perplexity of models extended to extreme context size via PoSE on PG-19 and Books3. We\nshow that our training method can effectively extend context window size to 128k when combined\nwith YaRN interpolation.\nModelGutenberg (PG-19) Books3\n32k 64k 96k 128k 32k 64k 96k 128k\nPoSE-Linear-96k 10.18 11.11 13.57 - 9.98 10.90 13.42 -\nPoSE-NTK-96k 7.98 20.39 38.73 - 8.29 20.82 40.39 -\nPoSE-YaRN-96k 8.31 8.65 9.36 - 8.90 9.40 10.38 -\nPoSE-Linear-128k 16.90 22.47 26.77 31.18 26.20 43.62 57.08 70.87' metadata={'source': 'pdfs/paper_2.pdf', 'page': 8}" "page_content='PoSE-NTK-128k 8.04 14.84 29.48 34.80 8.34 16.04 31.42 37.00\nPoSE-YaRN-128k 9.32 10.36 10.77 11.33 10.56 12.30 13.07 13.81\nTable 3: Performance of PoSE-extended LLaMA model on standard benchmarks in comparison with\nFull-length fine-tuning and the original LLaMA. 
We show that PoSE-extended models exhibit only\nmarginal performance degradation compared with Full-length fine-tuning and the original version.\nModelZero-Shot Few-Shot\nBoolQ PIQA WinoGrande TruthfulQA ARC-C HellaSwag' metadata={'source': 'pdfs/paper_2.pdf', 'page': 8}" "page_content='LLaMA 75.11 78.67 69.85 34.08 51.19 77.75\nFull-Linear-16k 70.95 77.64 69.06 31.89 48.55 74.19\nFull-NTK-16k 75.80 78.08 68.98 33.83 48.81 76.57\nFull-YaRN-16k 73.88 77.64 68.15 34.12 50.60 77.18\nPoSE-Linear-16k 74.50 78.13 68.59 32.05 48.29 75.56\nPoSE-NTK-16k 74.28 78.24 68.90 33.89 49.83 76.82\nPoSE-YaRN-16k 74.28 78.02 69.06 34.00 49.23 77.04\nPoSE-Linear-128k 67.71 76.22 67.56 36.16 39.93 66.04\nPoSE-NTK-128k 75.35 78.18 68.98 32.71 49.66 76.19\nPoSE-YaRN-128k 73.61 77.80 70.01 34.47 48.46 75.54' metadata={'source': 'pdfs/paper_2.pdf', 'page': 8}" "page_content='The outcomes of these experiments are detailed in Table 2. It is observe that, PoSE successfully\nextends the model’s context window to 96k when coupled with Linear interpolation, and further\nextends the context window to 128k when paired with YaRN. These promising results consolidates\nthe effectiveness of PoSE for extreme context window extension.\n5.4 E VALUATION OF CAPABILITY ON ORIGINAL CONTEXT WINDOW' metadata={'source': 'pdfs/paper_2.pdf', 'page': 8}" "page_content='In this section, we examine the capabilities of the PoSE-extended models on the original context win-\ndow using standard benchmarks. We combine the Hugging Face Open LLM Leaderboard (Face, 2023)\nwith a subset of LLaMA benchmarks to assess zero-shot and few-shot performance. For zero-shot\nevaluation, we employ BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), WinoGrande (Keisuke\net al., 2019), and TruthfulQA (Lin et al., 2022). For few-shot evaluation, we utilize 25-shot ARC-' metadata={'source': 'pdfs/paper_2.pdf', 'page': 8}" "page_content='Challenge (Clark et al., 2018) and 10-shot HellaSwag (Zellers et al., 2019). Our evaluation metrics\nare benchmark-specific: for BoolQ, PIQA, and WinoGrande, we report accuracy; for TruthfulQA, we\nreport mc2; and for ARC-C and HellaSwag, we report normalized accuracy.\nTable 3 summarizes the results. It is observed that, PoSE-extended models exhibit only marginal\nperformance degradation compared with Full-length fine-tuning and the original LLaMA, with the' metadata={'source': 'pdfs/paper_2.pdf', 'page': 8}" "page_content='only exception of the 128k model employing linear interpolation. This indicates that while extending\ncontext window size, PoSE effectively preserves original language comprehension ability.\n9' metadata={'source': 'pdfs/paper_2.pdf', 'page': 8}" "page_content='6 C ONCLUSION\nIn this paper, we introduce Positional Skip-wis E(PoSE) training to efficiently extend the context\nwindow of Large Language Models. PoSE simulates long inputs by manipulating position indices,\nthereby requiring only the original context window for fine-tuning, successfully decoupling train\nlength and target length. Experiments have shown that, compared with fine-tuning on the full length,\nPoSE greatly reduces memory and time overhead. Taking advantage of this, we have managed to' metadata={'source': 'pdfs/paper_2.pdf', 'page': 9}" "page_content='extend LLaMA model to 128k on 8 V100 GPUs, observing only minimal performance degradation on\nstandard benchmarks. We have also empirically verified that PoSE is compatible with all RoPE-based\nLLMs and position interpolation strategies.\nREFERENCES\nBaichuan. 
Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305 , 2023.\nURLhttps://arxiv.org/abs/2309.10305 .\nYonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning' metadata={'source': 'pdfs/paper_2.pdf', 'page': 9}" "page_content='about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial\nIntelligence , 2020.\nTom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,\nArvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are\nfew-shot learners. Advances in neural information processing systems , 33:1877–1901, 2020.\nAydar Bulatov, Yury Kuratov, and Mikhail Burtsev. Recurrent memory transformer. Advances in' metadata={'source': 'pdfs/paper_2.pdf', 'page': 9}" "page_content='Neural Information Processing Systems , 35:11079–11091, 2022.\nShouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending context window of\nlarge language models via positional interpolation. ArXiv , abs/2306.15595, 2023a.\nYukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. Longlora:\nEfficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307 ,\n2023b.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 9}" "page_content='Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina\nToutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. In NAACL ,\n2019.\nPeter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and\nOyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge.\narXiv preprint arXiv:1803.05457 , 2018.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 9}" "page_content='Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov.\nTransformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the\n57th Annual Meeting of the Association for Computational Linguistics , pp. 2978–2988, 2019.\nTri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. 2023.\nTri Dao, Daniel Y . Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and' metadata={'source': 'pdfs/paper_2.pdf', 'page': 9}" "page_content='memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing\nSystems , 2022.\nHugging Face. Open llm leaderboard. https://huggingface.co/spaces/\nHuggingFaceH4/open_llm_leaderboard , 2023.\nLeo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang,\nHorace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800gb\ndataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027 , 2020.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 9}" "page_content='Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. Transformer language models without\npositional encodings still learn positional information. In Findings of the Association for Computa-\ntional Linguistics: EMNLP 2022 , pp. 1382–1390, Abu Dhabi, United Arab Emirates, December\n2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-emnlp.99.\n10' metadata={'source': 'pdfs/paper_2.pdf', 'page': 9}" "page_content='Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. Efficient attentions for long\ndocument summarization. 
In Proceedings of the 2021 Conference of the North American Chapter\nof the Association for Computational Linguistics: Human Language Technologies , pp. 1419–1436,\nOnline, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.\n112.\nkaiokendev. Things i’m learning while training superhot. https://kaiokendev.github.io/' metadata={'source': 'pdfs/paper_2.pdf', 'page': 10}" "page_content='til#extending-context-to-8k , 2023.\nSakaguchi Keisuke, Le Bras Ronan, Bhagavatula Chandra, and Choi Yejin. Winogrande: An\nadversarial winograd schema challenge at scale. 2019.\nWoosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.\nGonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model\nserving with pagedattention, 2023.\nBenjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean' metadata={'source': 'pdfs/paper_2.pdf', 'page': 10}" "page_content='Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, and Daniel Haziza.\nxformers: A modular and hackable transformer modelling library. https://github.com/\nfacebookresearch/xformers , 2022.\nMukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, and Lingpeng Kong.\nIn-context learning with many demonstration examples. arXiv preprint arXiv:2302.04931 , 2023.\nStephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human' metadata={'source': 'pdfs/paper_2.pdf', 'page': 10}" "page_content='falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational\nLinguistics (Volume 1: Long Papers) , pp. 3214–3252, 2022.\nAmirkeivan Mohtashami and Martin Jaggi. Landmark attention: Random-access infinite context\nlength for transformers, 2023.\nBowen Peng and Jeffrey Quesnelle. Ntk-aware scaled rope allows llama models\nto have extended (8k+) context size without any fine-tuning and minimal perplex-\nity degradation. https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/' metadata={'source': 'pdfs/paper_2.pdf', 'page': 10}" "page_content='ntkaware_scaled_rope_allows_llama_models_to_have , 2023.\nBowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window\nextension of large language models, 2023.\nOfir Press, Noah A Smith, and Mike Lewis. Train short, test long: Attention with linear biases\nenables input length extrapolation. arXiv preprint arXiv:2108.12409 , 2021.\nShawn Presser. https://twitter.com/theshawwn/status/\n1320282149329784833 , 2020.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 10}" "page_content='Jeffrey Quesnelle. Dynamically scaled rope further increases performance of long context llama with\nzero fine-tuning. https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/\ndynamically_scaled_rope_further_increases/ , 2023.\nJack W Rae, Anna Potapenko, Siddhant M Jayakumar, Chloe Hillier, and Timothy P Lillicrap.\nCompressive transformers for long-range sequence modelling. arXiv preprint , 2019.\nSamyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations' metadata={'source': 'pdfs/paper_2.pdf', 'page': 10}" "page_content='toward training trillion parameter models. In SC20: International Conference for High Performance\nComputing, Networking, Storage and Analysis , pp. 1–16. IEEE, 2020.\nAnian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, Mehdi Bennani,\nShane Legg, and Joel Veness. 
Randomized positional encodings boost length generalization of\ntransformers. In Proceedings of the 61st Annual Meeting of the Association for Computational' metadata={'source': 'pdfs/paper_2.pdf', 'page': 10}" "page_content='Linguistics (Volume 2: Short Papers) , pp. 1889–1903, Toronto, Canada, July 2023. Association for\nComputational Linguistics. doi: 10.18653/v1/2023.acl-short.161.\nJianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced\ntransformer with rotary position embedding. arXiv preprint arXiv:2104.09864 , 2021.\n11' metadata={'source': 'pdfs/paper_2.pdf', 'page': 10}" "page_content='Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary,\nXia Song, and Furu Wei. A length-extrapolatable transformer. In Proceedings of the 61st\nAnnual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pp.\n14590–14604, Toronto, Canada, July 2023. Association for Computational Linguistics. doi:\n10.18653/v1/2023.acl-long.816.\nHugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée' metadata={'source': 'pdfs/paper_2.pdf', 'page': 11}" "page_content='Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and\nefficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023a.\nHugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay\nBashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation\nand fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023b.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 11}" "page_content='Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski,\nand Piotr Miło ´s. Focused transformer: Contrastive training for context scaling. arXiv preprint\narXiv:2307.03170 , 2023.\nBen Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model.\nhttps://github.com/kingoflolz/mesh-transformer-jax , May 2021.\nWeizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 11}" "page_content='Augmenting language models with long-term memory. arXiv preprint arXiv:2306.07174 , 2023.\nYuhuai Wu, Markus N Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers.\narXiv preprint arXiv:2203.08913 , 2022.\nRowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine\nreally finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for\nComputational Linguistics , pp. 4791–4800, 2019.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 11}" "page_content='Azerbayev Zhangir, Ayers Edward, and Bartosz Piotrowski. Proof-pile. https://github.com/\nzhangir-azerbayev/proof-pile , 2022.\nYucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Guodong Long, Can Xu, and Daxin Jiang.\nFine-grained distillation for long document retrieval. arXiv preprint arXiv:2212.10423 , 2022.\n12' metadata={'source': 'pdfs/paper_2.pdf', 'page': 11}" "page_content='Table 4: Comparison of different methods for choosing vi. We report perplexity with evaluation\ncontext window ranging from 2k to 16k. We show that these variations have relatively little impact\non the outcomes of fine-tuning.\nMethodGovReport Proof-pile\n2k 4k 8k 16k 2k 4k 8k 16k\nvi∼ U (. . .) 
4.84 4.68 4.60 4.60 2.95 2.74 2.61 2.60\nvi= 0 4.85 4.72 4.64 4.68 2.96 2.75 2.63 2.61\nvi=ui 4.84 4.68 4.60 4.60 2.95 2.73 2.60 2.56\nA A BLATION OF TEXT CONTAINED WITHIN EACH CHUNK' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}" "page_content='PoSE divide the original context window into several chunks, and modify the position indices of each\nchunk to cover a wider range of relative positions in a fixed window. However, it does not impose a\nparticular constraint on the text contained within each chunk. Recall that in Equation 5, we assign the\ncontent of chunk cias below:\nci=x[vi+sti:vi+sti+li]\nIn this section, we explore several strategies for determining vi: 1) sampling from uniform distribution,' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}" "page_content='vi∼ U({vi−1, . . . , L x−Lc), which is the one used in PoSE; 2) vi= 0, which results in genuinely\ncontinuous content for the chunks; 3) vi=ui, aligning the manipulated position indices with actual\npositions in the original text. We use the same test setting as Section 4.2, extending LLaMA-7B from\n2k to 16k context. As can be seen in Table 4, we show that these variations have relatively little\nimpact on the outcomes of fine-tuning.\nB A NALYSIS OF CHUNK NUMBER N' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}" "page_content='0 2500 5000 7500 10000 12500 15000\nRelative Position0.00.20.40.60.81.0ProbabilityOriginal\n2 chunks\n3 chunks\nRandPos\nFigure 5: Coverage probability for each relative position in a single training example (2k -> 16k).\nUtilizing multiple chunks reduces coverage probability within the original [0,2,048] context window,\nwhile enhancing the coverage likelihood of relative positions in the range of [2,048,16,383]. Proba-' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}" "page_content='bility of coverage increases with the number of chunks. Pushing the chunk number to the limit is\nRandPos, utilizing 2048 chunks, capable of covering every relative position in each training example\nby expectation.\nPoSE achieves coverage of all positions within the target context window by randomly sampling\nthe chunk sizes and skipping bias terms for each training example. In this section, we explore the' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}" "page_content='probability of each relative position being covered by a training example, using a context extension\nof 2,048 to 16,384 as an example. For the unextended original version, the probability of a relative\nposition within 2048 being covered is 1, and the probability of a relative position above 2,048 being\ncovered is 0. 
For the cases where the number of chunks is 2, 3, or 2,048 (i.e., RandPos), we use the' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}" "page_content='
# Coverage-probability estimation (Figure 6), shown for Lc = 2048, Lt = 16384.
import random
import numpy as np

Lc, Lt = 2048, 16384

# Case 1: 2 chunks
visit_prob_list = np.array([0] * Lt, dtype=float)
iter_times = 10000
for _ in range(iter_times):
    l0 = random.randint(1, Lc - 1)
    u1 = random.randint(0, Lt - Lc)
    l1 = Lc - l0
    rng1 = set(range(1, max(l0, l1)))      # within-chunk relative positions
    rng2 = set(range(u1 + 1, u1 + Lc))     # cross-chunk relative positions: u1+1 .. u1+Lc-1
    rng = rng1 | rng2
    for x in rng:
        visit_prob_list[x] += 1
visit_prob_list /= iter_times

# Case 2: 3 chunks
visit_prob_list = np.array([0] * Lt, dtype=float)
iter_times = 10000
for _ in range(iter_times):
    l0 = random.randint(1, Lc - 2)
    l1 = random.randint(1, Lc - l0 - 1)
    l2 = Lc - l0 - l1
    u1 = random.randint(0, Lt - Lc)' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}" "page_content='
    u2 = random.randint(u1, Lt - Lc)
    rng1 = set(range(1, max(l0, l1, l2)))
    rng2 = set(range(u1 + 1, u1 + l0 + l1))
    rng3 = set(range(u2 - u1 + 1, u2 - u1 + l1 + l2))
    rng4 = set(range(u2 + l1 + 1, u2 + Lc))
    rng = rng1 | rng2 | rng3 | rng4
    for x in rng:
        visit_prob_list[x] += 1
visit_prob_list /= iter_times

# Case 3: 2048 chunks (RandPos)
visit_prob_list = np.array([0] * Lt, dtype=float)
iter_times = 100
for _ in range(iter_times):
    tot_pos_list = list(range(Lt))
    new_pos_list = random.sample(tot_pos_list, Lc)
    new_pos_list.sort()
    distance_rng = set()' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}" "page_content='
    for i in range(0, len(new_pos_list) - 1):
        for j in range(i + 1, len(new_pos_list)):
            distance_rng.add(new_pos_list[j] - new_pos_list[i])
    for x in distance_rng:
        visit_prob_list[x] += 1
visit_prob_list /= iter_times

Figure 6: Python code used for calculating the coverage probability of each relative position in Figure 5.
Table 5: Comparison of different chunk numbers. We report perplexity with evaluation context' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}" "page_content='window ranging from 2k to 16k. By increasing the chunk number, relative positions in [2,048, 16,383] receive an increased chance of being trained, rendering better results for context extension. However, an extremely large chunk number also damages model performance.
Chunk number | Proof-pile (2k / 4k / 8k / 16k)
1    | 2.83 / >10^3 / >10^3 / >10^3
2    | 2.95 / 2.74 / 2.61 / 2.60
3    | 2.93 / 2.72 / 2.60 / 2.59
2048 | 7.26 / 6.83 / 6.76 / 7.73
Monte Carlo method to estimate this coverage probability. The code used is demonstrated in Figure 6.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}" "page_content='The estimated results are shown in Figure 5. It can be seen that PoSE reduces the coverage probability of positions within the original context window, while all relative positions in [2,048, 16,383] receive a certain increase in the chance of being covered, and the probability of coverage increases as the number of chunks increases. For the case where the number of chunks is equal to 2,048, the probability of each relative position being covered is close to 1. With this observation, we further' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}" "page_content='compare the impact of the chunk number on language modeling capability, as presented in Table 5. Increasing the chunk number renders better results for context extension. However, an extremely large chunk number also damages model performance, due to the severe deviation from the position encoding structure used in the pre-training phase. 
We believe that the choice of the number of chunks is\na trade-off between training efficiency and performance.\n14' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}" "page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nZichang Liu1Jue Wang2Tri Dao3Tianyi Zhou4Binhang Yuan5Zhao Song6Anshumali Shrivastava1\nCe Zhang5Yuandong Tian7Christopher Ré3Beidi Chen8 7\nAbstract\nLarge language models (LLMs) with hundreds of\nbillions of parameters have sparked a new wave\nof exciting AI applications. However, they are\ncomputationally expensive at inference time. Spar-\nsity is a natural approach to reduce this cost, but' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}" "page_content='existing methods either require costly retraining,\nhave to forgo LLM’s in-context learning ability, or\ndo not yield wall-clock time speedup on modern\nhardware. We hypothesize that contextual sparsity ,\nwhich are small, input-dependent sets of attention\nheads and MLP parameters that yield approxi-\nmately the same output as the dense model for a\ngiven input, can address these issues. We show that\ncontextual sparsity exists, that it can be accurately' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}" "page_content='predicted, and that we can exploit it to speed up\nLLM inference in wall-clock time without compro-\nmising LLM’s quality or in-context learning ability.\nBased on these insights, we propose DEJAVU , a\nsystem that uses a low-cost algorithm to predict\ncontextual sparsity on the fly given inputs to each\nlayer, along with an asynchronous and hardware-\naware implementation that speeds up LLM\ninference. We validate that DEJAVU can reduce the\ninference latency of OPT-175B by over 2 ×com-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}" "page_content='pared to the state-of-the-art FasterTransformer,\nand over 6 ×compared to the widely used Hugging\nFace implementation, without compromising\nmodel quality. The code is available at https:\n//github.com/FMInference/DejaVu .\n1 Introduction\nLarge language models (LLMs), such as GPT-3, PaLM,\nand OPT have demonstrated that an immense number of\n1Rice University2Zhe Jiang University3Stanford Uni-\nversity4University of California, San Diego5ETH Zurich\n6Adobe Research7Meta AI (FAIR)8Carnegie Mellon Univer-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}" "page_content='sity. Correspondence to: Zichang Liu , Tri Dao\n, Tianyi Zhou , Zhao Song\n, Beidi Chen .\nProceedings of the 40thInternational Conference on Machine\nLearning , Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright\n2023 by the author(s).parameters unleashes impressive performance and emergent\nin-context-learning abilities—they can perform a task by\nconditioning on input-output examples, without updating' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}" "page_content='their parameters (Bommasani et al., 2021; Liang et al.,\n2022; Brown et al., 2020; Min et al., 2022; Chan et al.,\n2022). However, they are very expensive at inference time,\nespecially for latency-sensitive applications (Pope et al.,\n2022). An ideal inference-time model should use less com-\nputation and memory while maintaining the performance\nand special abilities of pre-trained LLMs. 
The simplest and most natural approach is sparsification or pruning, which has a long history before the LLM era (LeCun et al., 1989). Unfortunately, speeding up inference-time sparse LLMs in wall-clock time while maintaining quality and in-context learning abilities remains a challenging problem.
While sparsity and pruning have been well studied, they have not seen wide adoption on LLMs due to the poor quality and efficiency trade-offs on modern hardware such as GPUs. First, it is infeasible to retrain or iteratively prune models at the scale of hundreds of billions of parameters. Thus, methods based on iterative pruning and the lottery ticket hypothesis (Lee et al., 2018; Frankle & Carbin, 2018) can only be applied to smaller-scale models. Second, it is challenging to find sparsity that preserves the in-context learning ability of LLMs. Many works have shown the effectiveness of task-dependent pruning (Michel et al., 2019; Bansal et al., 2022), but maintaining different models for each task conflicts with the task independence goal of LLMs. Lastly, it is hard to achieve wall-clock time speed-up with unstructured sparsity due to its well-known difficulty on modern hardware (Hooker, 2021). For example, a recent development in zero-shot pruning, SparseGPT (Frantar & Alistarh, 2023), finds 60% unstructured sparsity but does not yet lead to any wall-clock time speedup.
An ideal sparsity for LLMs should (i) not require model retraining, (ii) preserve quality and in-context learning ability, and (iii) lead to speed-up in wall-clock time on modern hardware. To achieve such demanding requirements, we go beyond static sparsity in previous works (e.g., structured/unstructured weight pruning). We instead envision contextual sparsity: small, input-dependent sets of attention heads and MLP parameters that lead to (approximately) the same output as the full model for an input.

Figure 1. (a) Contextual sparsity across transformer layers for OPT-175B; (b) accuracy versus theoretical reduction for static, non-contextual, and contextual sparsity. (1) LLMs have up to 85% contextual sparsity for a given input. (2) Contextual sparsity has much better efficiency-accuracy trade-offs (up to 7×) than non-contextual sparsity or static sparsity.

Inspired by the connections between LLMs, Hidden Markov Models (Xie et al., 2022; Baum & Petrie, 1966), and the classic Viterbi algorithm (Viterbi, 1967), we hypothesize that for pre-trained LLMs, contextual sparsity exists given any input. The hypothesis, if true, would enable us to cut off specific attention heads and MLP parameters (structured sparsity) on the fly at inference time, without modifying pre-trained models. However, there are three challenges.
Existence: It is nontrivial to verify whether such contextual sparsity exists, and naive verification can be prohibitively expensive.
Prediction: Even if contextual sparsity exists, it is challenging to predict the sparsity for a given input in advance.
Efficiency: Even if the sparsity can be predicted, it might be difficult to achieve end-to-end wall-clock time speedup. Taking OPT-175B as an example, the latency of one MLP block is only 0.2 ms on an 8×A100 80GB machine. Without a fast prediction and optimized implementation, the overhead can easily increase the LLM latency rather than reduce it.
In this work, we address these challenges as follows:
Existence: Fortunately, we verify the existence of contextual sparsity with a surprisingly simple approach. To achieve essentially the same output, contextual sparsity is on average 85% structured sparse and thereby potentially leads to a 7× parameter reduction for each specific input while maintaining accuracy (Figure 1(a)). During our exploration of contextual sparsity, we make important empirical observations and build a theoretical understanding of major components in LLMs that help address the prediction and efficiency challenges.

Figure 2. DEJAVU uses lookahead predictors to side-step prediction costs: given the input to the attention layer at block k, they (asynchronously) predict the contextual sparsity for the MLP at block k, and given the input to the MLP at block k, they predict the sparsity for the attention head at the next layer.

Prediction: We discover that contextual sparsity depends not only on individual input tokens (i.e., non-contextual dynamic sparsity) but also on their interactions (contextual dynamic sparsity). Figure 1(b) shows that with purely dynamic information, sparsity prediction is inaccurate. Only with token embeddings carrying sufficient contextual information can we predict sparsity accurately. Another finding is that the contextual dynamic sparsity for every layer can be predicted based on the "similarity" between layer parameters (heads/MLP) and the output from the previous layer, which carries the immediate contextual mixture of token embeddings.
Efficiency: Because model parameters are static at inference time, inspired by the classical nearest neighbor search (NNS) literature and its applications in efficient deep learning, it is possible to formulate the above similarity-based prediction as an NNS problem (Indyk & Motwani, 1998b; Zhang et al., 2018; Chen et al., 2020a). However, as mentioned, the overhead might be difficult to overcome as we would need to perform on-the-fly predictions before every layer. Luckily, we exploit a phenomenon of LLMs where token embeddings change slowly across layers due to residual connections (well-known in computer vision (He et al., 2016)). Since the inputs to a few consecutive layers are very similar, we can design an asynchronous lookahead predictor (Figure 2), as sketched below.
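The following is a minimal, schematic sketch of that lookahead idea, not the DEJAVU implementation: the class and method names (LookaheadBlock, mlp_predictor_k, attn_predictor_k_plus_1) and the thread-pool-based asynchrony are illustrative assumptions; the real system overlaps prediction with GPU kernels rather than Python threads.

from concurrent.futures import ThreadPoolExecutor
import numpy as np

class LookaheadBlock:
    """Schematic transformer block k with cross-layer sparsity lookahead."""
    def __init__(self, attn_k, mlp_k, mlp_predictor_k, attn_predictor_k_plus_1):
        self.attn_k = attn_k                                 # dense attention of block k
        self.mlp_k = mlp_k                                   # dense MLP of block k
        self.mlp_predictor_k = mlp_predictor_k               # predicts MLP sparsity of block k
        self.attn_predictor_next = attn_predictor_k_plus_1   # predicts head sparsity of block k+1
        self.pool = ThreadPoolExecutor(max_workers=1)

    def forward(self, x, head_mask):
        # head_mask for this block's attention was predicted one block earlier (lookahead).
        mlp_mask_future = self.pool.submit(self.mlp_predictor_k, x)    # overlap with attention
        h = x + self.attn_k(x, head_mask)
        next_mask_future = self.pool.submit(self.attn_predictor_next, h)  # overlap with MLP
        y = h + self.mlp_k(h, mlp_mask_future.result())
        return y, next_mask_future.result()                  # mask is consumed by block k+1

# Toy usage with stand-in callables (shapes and logic are placeholders):
d = 8
block = LookaheadBlock(
    attn_k=lambda x, head_mask: 0.1 * x,
    mlp_k=lambda h, neuron_mask: 0.1 * h,
    mlp_predictor_k=lambda x: np.ones(4 * d, dtype=bool),
    attn_predictor_k_plus_1=lambda h: np.ones(4, dtype=bool),
)
out, next_mask = block.forward(np.zeros(d), head_mask=np.ones(4, dtype=bool))
print(out.shape, next_mask.shape)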
Based on our findings, we present a system, DEJAVU, that exploits contextual sparsity and realizes efficient LLMs for latency-sensitive applications.
• In Section 4.1 and Section 4.2, we present a low-cost, learning-based algorithm to predict sparsity on the fly. Given the input to a specific layer, it predicts a relevant subset of attention heads or MLP parameters in the next layer and only loads them for the computation.
• In Section 4.3, we propose an asynchronous predictor (similar to a classic branch predictor (Smith, 1998)) to avoid the sequential overhead. A theoretical guarantee justifies that the cross-layer design suffices for accurate sparsity prediction.
After integrating a hardware-aware implementation of sparse matrix multiply (Section 4.4), DEJAVU (written mostly in Python) can reduce the latency of open-source LLMs such as OPT-175B by over 2× end-to-end without quality degradation compared to the state-of-the-art library FasterTransformer from Nvidia (written entirely in C++/CUDA), and by over 2× compared to the widely used Hugging Face implementation at small batch sizes. Furthermore, we show several ablations on different components of DEJAVU and its compatibility with quantization techniques.
2 Related Work and Problem Formulation
We first briefly discuss the rich literature on efficient inference. Then, we introduce the latency breakdown in our setting. Last, we provide a formal problem formulation.
2.1 Quantization, Pruning, Distillation for Inference
Various relaxations have been studied for decades for model inference in machine learning. There are three main techniques: quantization (Han et al., 2015; Jacob et al., 2018; Nagel et al., 2019; Zhao et al., 2019), pruning or sparsity (Molchanov et al., 2016; Liu et al., 2018; Hoefler et al., 2021), and distillation (Hinton et al., 2015; Tang et al., 2019; Touvron et al., 2021). They are orthogonal areas and usually excel in different settings. Recently, there has been active research attempting to apply one or a combination of such techniques to LLM inference (Yao et al., 2022; Park et al., 2022; Dettmers et al., 2022; Frantar et al., 2022; Frantar & Alistarh, 2023; Bansal et al., 2022; Xiao et al., 2022). More discussion is presented in Appendix A.
2.2 LLM Inference Latency Breakdown
The generative procedure of LLMs consists of two phases: (i) the prompt phase takes an input sequence to generate the keys and values (KV cache) for each transformer block of the LLM, which is similar to the forward pass of LLM training; and (ii) the token generation phase utilizes and updates the KV cache to generate tokens step by step, where the current token generation depends on previously generated tokens. This paper studies the setting where the token generation phase easily dominates the end-to-end inference time.
As shown in Table 1, generating a sequence of length 128 takes much longer than processing a sequence of length 128 as a prompt, due to the I/O latency of loading model parameters. In addition, Table 2 shows that attention and MLP are both bottlenecks in LLMs: in 175B models, loading MLP parameters takes around 2/3 of the total I/O and attention heads take the other 1/3. Further, in the tensor-parallel regime, there are two communications between GPUs, one after the attention block and the other after the MLP block. As shown in Table 3, communication between GPUs takes around 15% of the token generation latency. This paper focuses on making attention and MLP more efficient. The communication cost implies that the upper bound of such speed-up is around 6× when skipping all transformer blocks.

Table 1. Theoretical breakdown for prompting versus token generation (tensor model parallelism on 8 A100-80G GPUs).
                       TFLOPs   I/O      Compute Latency (ms)   I/O Latency (ms)
Prompting 128          44.6     330 GB   17.87                  20.6
Token Generation 128   44.6     41 TB    17.87                  2600

Table 2. Theoretical breakdown for the Attention block versus the MLP block in one transformer layer when generating one token (tensor model parallelism on 8 A100-80G GPUs).
                  GFLOPs   I/O (GB)   Compute Latency (ms)   I/O Latency (ms)
Attention Block   1.21     1.12       0.00048                0.07
MLP Block         2.41     2.25       0.00096                0.14

Table 3. Latency breakdown of generating 1 token under the setting of batch size 1 and prompt length 128 on 8 A100-80GB.
All Reduce   MLP Block   Attention Block   Others
6 ms         19 ms       13 ms             2 ms

2.3 Problem Formulation
The goal is to reduce the generation latency of LLMs by exploiting contextual sparsity. In the following, we formally define the sparsified attention and MLP blocks.
Sparsified MLP: There are two linear layers in one MLP block, W^1, W^2 ∈ R^{d×4d}. Denote by y ∈ R^{1×d} the input to the MLP block in the current generation step. Let each column (the weight of the i-th neuron) of the linear layers be W^1_i, W^2_i ∈ R^{d×1}. With contextual sparsity, only a small set of them is required for the computation. Let S_M ⊆ [4d] denote such a set of neurons for input y. The sparsified MLP computation is

MLP_{S_M}(y) = σ(y W^1_{S_M}) (W^2_{S_M})^⊤,   (1)

where σ is the activation function, e.g., ReLU or GeLU. Note that since the computation in the first linear layer results in sparse activations, the second linear layer is also sparsified.
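As an illustration of Eq. (1), here is a minimal NumPy sketch of the sparsified MLP. It is not the paper's implementation: the weights and the neuron subset S_M are random placeholders, and ReLU stands in for σ.

import numpy as np

def sparse_mlp(y, W1, W2, S_M):
    # y: (1, d) input; W1, W2: (d, 4d) weights; S_M: indices of the selected neurons.
    # Only the selected columns of W1 (and the matching columns of W2) are touched,
    # so compute and parameter I/O scale with |S_M| instead of 4d, as in Eq. (1).
    h = np.maximum(y @ W1[:, S_M], 0.0)   # sparse first linear layer + ReLU
    return h @ W2[:, S_M].T               # second linear layer restricted to the same neurons

d = 16
rng = np.random.default_rng(0)
W1 = rng.normal(size=(d, 4 * d))
W2 = rng.normal(size=(d, 4 * d))
y = rng.normal(size=(1, d))
S_M = rng.choice(4 * d, size=8, replace=False)   # placeholder neuron subset
print(sparse_mlp(y, W1, W2, S_M).shape)          # (1, d)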
Sparsified Attention: Let X ∈ R^{n×d} denote the embeddings of all tokens (e.g., prompts and previously generated tokens). Let y ∈ R^{1×d} be the input to the multi-head attention (MHA) in the current generation step. Suppose there are h heads. For each i ∈ [h], we use W^K_i, W^Q_i, W^V_i ∈ R^{d×d_h} to denote the key, query, and value projections for the i-th head, and W^O_i ∈ R^{d_h×d} for the output projection. With contextual sparsity, we denote by S_A a small set of attention heads that leads to approximately the same output as the full attention for input y. Following the notation system in (Alman & Song, 2023), the sparsified MHA computation can be formally written as

MHA_{S_A}(y) = Σ_{i ∈ S_A} H_i(y) W^O_i,

where H_i(y) ∈ R^{1×d_h}, W^O_i ∈ R^{d_h×d}, and H_i(y) : R^d → R^{d_h} and D_i(y) ∈ R are given by

H_i(y) := D_i(y)^{-1} exp(y W^Q_i (W^K_i)^⊤ X^⊤) X W^V_i,   (2)
D_i(y) := exp(y W^Q_i (W^K_i)^⊤ X^⊤) 1_n.

For both MLP and attention, given a compute budget, the goal is to find S_M and S_A that minimize the error between the sparse approximation and the full computation.
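A minimal NumPy transcription of Eq. (2) restricted to a head subset S_A follows. It is illustrative only: the weights are random placeholders, the head subset is arbitrary, and the per-head loop implements the formula directly rather than any optimized kernel.

import numpy as np

def sparse_mha(y, X, WQ, WK, WV, WO, S_A):
    # y: (1, d) current-step input; X: (n, d) embeddings of previous tokens.
    # WQ, WK, WV: (h, d, dh); WO: (h, dh, d); S_A: indices of the selected heads.
    out = np.zeros((1, WO.shape[2]))
    for i in S_A:
        scores = np.exp(y @ WQ[i] @ (X @ WK[i]).T)    # (1, n) unnormalized attention
        H_i = (scores / scores.sum()) @ (X @ WV[i])   # (1, dh), Eq. (2)
        out += H_i @ WO[i]                            # accumulate this head's contribution
    return out

d, dh, h, n = 16, 4, 4, 10
rng = np.random.default_rng(0)
WQ, WK, WV = (rng.normal(size=(h, d, dh)) for _ in range(3))
WO = rng.normal(size=(h, dh, d))
y, X = rng.normal(size=(1, d)), rng.normal(size=(n, d))
print(sparse_mha(y, X, WQ, WK, WV, WO, S_A=[0, 2]).shape)   # (1, d)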
3 Pre-trained LLMs are Contextually Sparse
In this section, we present several key observations and theoretical understandings of sparsity in LLMs, upon which the DEJAVU design is based. We first test the contextual sparsity hypothesis and verify that contextual sparsity exists in pre-trained LLMs in Section 3.1. Then, we build an understanding of why contextual sparsity happens naturally even when LLMs are densely trained in Section 3.2. Finally, we present an observation on residual connections and explain their relationship to contextual sparsity analytically in Section 3.3.
3.1 Contextual Sparsity Hypothesis
Inspired by prior pruning literature (Molchanov et al., 2016), we find that a surprisingly simple method is sufficient to study and verify our hypothesis. In this section, we describe the testing procedure, observation details, and insights of this study.
Verification: Our test is performed on OPT-175B, 66B, and 30B models and various downstream datasets such as OpenBookQA (Mihaylov et al., 2018) and WikiText (Merity et al., 2016). We find the contextual sparsity for every input example with two forward passes of the model. In the first pass, we record a subset of parameters, specifically which attention heads and MLP neurons yield large output norms for the input. In the second pass, each input example only uses the recorded subset of parameters for the computation. Surprisingly, these two forward passes lead to similar prediction or performance on all in-context learning and language modeling tasks.
Observation: Figure 3 shows that on average, we can impose up to 80% sparsity on attention heads and 95% sparsity on MLP neurons. As mentioned in Section 2, the OPT-175B model has 2× more MLP parameters than attention-block parameters; therefore, the total sparsity here is around 85%. Since these are all structured sparsity (heads and neurons), predicting them accurately could potentially lead to a 7× speedup.

Figure 3. (a) Contextual sparsity in attention heads; (b) contextual sparsity in MLP blocks, plotted per transformer layer for OPT-30B, OPT-66B, and OPT-175B. In (a), we plot the percentage of not-activated attention heads; by only keeping heads that yield large output norms, we can silence over 80% of attention heads for a given token. In (b), we plot the average sparsity we impose on MLP layers; we can zero out over 95% of MLP parameters for a given token.

Insight: It is intuitive that we can find contextual sparsity in MLP blocks at inference time because of their activation functions, e.g., ReLU or GeLU (Kurtz et al., 2020). Similar observations were made by (Li et al., 2022). However, it is surprising that we can find contextual sparsity in attention layers. Note that finding contextual sparsity in attention is not the same as head pruning. We cross-check that different examples have different contextual sparsity. Although 80% of the parameters are not included in the paths for a given example, they might be used by other examples. Next, we will try to understand why contextual sparsity exists in attention blocks.
3.2 Token Clustering in Attention Layers
In the previous section, we verified that there exists contextual sparsity for a given input in LLMs. In this section, we try to understand the reason for this phenomenon, especially in attention layers. We first show an in-depth observation of attention. Then we present a hypothesis that self-attention heads are conceptually clustering algorithms. Last, we show analytical evidence to support this hypothesis.
Observation: Figure 4 shows the attention map of three different heads from the same layer for an example input. The next token it should predict is "Truck". Darker color represents higher attention scores. We observe that the middle head is a relatively uniform token-mixing head, while the top and bottom ones are "heavy hitter" attention heads (with high attention to "like" and "shipping"). Unsurprisingly, selecting only heavy hitter heads but not uniform heads does not affect the prediction, since uniform heads do not model or encode important token interactions. In the next section, we will also explain in detail how the criteria for selecting uniform attention heads and heads with small output norms are highly correlated.

Figure 4. We visualize the attention scores of three different heads at layer L for an exemplary sentence ("This fruit shipping company provide different vehicle options like car and [MASK]", with target token "Truck"). Head 42 and Head 44 give heavy attention scores on particular tokens while Head 43 is more uniform.

Hypothesis: We hypothesize that the attention head is performing mean-shift clustering (Derpanis, 2005). Recall the notation defined in Section 2.3. For the i-th head at the current layer, X = [x_1, ..., x_n]^⊤ ∈ R^{n×d} are the token embeddings in the previous time steps, and X W^K_i and X W^V_i are the projections of the embeddings. For an input embedding y, the output is ỹ_i = H_i(y), where H_i(y) is defined in Eq. (2). For each i ∈ [h], if we let K_i(x_j, y) := exp(y W^Q_i (W^K_i)^⊤ x_j) measure the similarity between x_j and y, and define

m_i(y) := (Σ_j K_i(x_j, y) x_j) / (Σ_j K_i(x_j, y)),

then we have ỹ_i = m_i(y) W^V_i. Further, if we set W^V_i = I and consider the residual connection followed by layer norm, then in the next layer, the embedding ŷ_i of the current token becomes ŷ_i = Normalize(y + ỹ_i) = Normalize(y + m_i(y)), which has a fixed point y = γ m_i(y) for any scalar γ. This iteration bears a resemblance to mean-shift clustering, which simply performs the iteration y ← m_i(y) until convergence and has the obvious fixed point y = m_i(y). Therefore, the self-attention head can be regarded as one mean-shift step that pushes the input embeddings of different tokens together, if they are already neighbors in the projection space specified by W^Q_i (W^K_i)^⊤. Different heads learn different projection spaces to perform clustering. These dynamics explain precisely why token embeddings tend to cluster after going through more layers, resulting in high attention scores among cluster members and low scores for non-members. Furthermore, the cluster patterns are different at different heads (more details in Appendix K). The above analysis not only provides an understanding of why contextual sparsity exists naturally in pre-trained LLMs, but also inspires our design of "similarity"-based sparsity prediction for DEJAVU in Section 4.
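To make the mean-shift reading concrete, here is a small illustrative computation of m_i(y) for one head together with the y ← m_i(y) iteration discussed above. The embeddings and projections are random placeholders, not model weights.

import numpy as np

def mean_shift_step(y, X, WQ_i, WK_i):
    # K_i(x_j, y) = exp(y WQ_i (WK_i)^T x_j); m_i(y) is the kernel-weighted mean of the x_j.
    weights = np.exp(X @ WK_i @ WQ_i.T @ y)    # (n,) similarities K_i(x_j, y)
    return (weights[:, None] * X).sum(axis=0) / weights.sum()

d, dh, n = 8, 4, 32
rng = np.random.default_rng(0)
WQ_i, WK_i = rng.normal(size=(d, dh)), rng.normal(size=(d, dh))
X = rng.normal(size=(n, d))                    # token embeddings of previous time steps
y = rng.normal(size=d)
for _ in range(5):                             # y <- m_i(y), the mean-shift iteration
    y = mean_shift_step(y, X, WQ_i, WK_i)
print(np.round(y, 3))                          # settles toward a weighted cluster center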
3.3 Slowly Changing Embeddings across Layers
We first present our observation that embeddings change slowly across consecutive layers. Then we provide a detailed analysis of the phenomenon. Finally, we show its close connection with contextual sparsity. Details are in Section B.
Highly similar embeddings in consecutive layers: In Figure 5(a), we show that for the same given input, the cosine similarity between embeddings or activations in two consecutive layers is exceptionally high on 7 different sizes of OPT models. Specifically, we collect activations from each layer while performing OPT model inference on the C4 validation set (Raffel et al., 2019). Taking OPT-175B as an example, starting from the second layer, the similarity between any two consecutive layers is around 0.99, which indicates that when an input is passed through the model, the direction of its embedding changes slowly. Interestingly, the most drastic change happens in the first layer. Furthermore, we increase the gap and investigate the similarity between the embedding at layer l and at layer l+n, shown in Figure 5(b). As we increase the gap, the similarity decreases as expected, while the differences in cosine similarity between the various choices of n are smaller at shallower layers. We plot the mean similarity, and the standard deviation is indicated by the shading. Similar plots on more models are presented in Appendix B.

Figure 5. Slowly changing embedding. Panels: (a) model comparison (OPT 125m through 175b), (b) similarity across layer gaps n = 1, 2, 4, 8, (c) residual around attention, (d) residual around MLP. Figure (a) shows the median cosine similarity between representations at two consecutive layers across all layers for different OPT models; all models show a similarity greater than 95%. Figure (b) shows that cosine similarity stays high even a few layers apart. For the residual connection X' = X + F(X) inside each block, we plot the ℓ2 norm of X and F(X) in Figure (c) and Figure (d); ∥X∥ is significantly higher than ∥F(X)∥, which explains the slowly changing embedding.
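A minimal sketch of the kind of measurement behind Figure 5(a)-(b) follows. It assumes one already has per-layer activations for the same input; here a synthetic residual-style sequence stands in for real OPT activations.

import numpy as np

def layer_cosine_similarity(acts, gap=1):
    # acts: list of per-layer activations, each of shape (seq_len, d), for one input.
    # Returns the mean cosine similarity between layer l and layer l + gap, for each l.
    sims = []
    for l in range(len(acts) - gap):
        a, b = acts[l], acts[l + gap]
        cos = (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
        sims.append(cos.mean())
    return np.array(sims)

rng = np.random.default_rng(0)
acts = []
x = rng.normal(size=(128, 64))
for _ in range(12):
    x = x + 0.05 * rng.normal(size=x.shape)   # small residual-style update per layer
    acts.append(x)
for n in (1, 2, 4, 8):
    print(n, layer_cosine_similarity(acts, gap=n).round(3))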
Connection to residuals: We verify that the high similarity of embeddings in LLM inference is due to the residual connection. We first dissect the computation graph inside each transformer layer to understand the cause of this phenomenon. There are two residual connections inside a transformer layer, one around the attention block and the other around the MLP block. The residual connection can be written as X + F(X), where F is either the multi-head attention or the two MLP layers. In Figure 5(c) and Figure 5(d), we can indeed see that ∥X∥ is significantly greater than ∥F(X)∥, confirming that embeddings change slowly because the residual norm is large.
Connection to contextual sparsity: We take a step deeper, trying to understand the reason behind the large residual norm with mathematical modeling. We discover that one possible reason for a small ∥F(X)∥ is high sparsity. For the MLP block, high sparsity may contribute to the small norm of F(X) because a large portion of the outputs have small norms. Similar reasoning applies to the attention block, and thus a large number of attention heads yield small-norm outputs.
Residual two-sides bound: Besides empirical reasoning, we formally define the computation of LLMs mathematically. Under our computation model, we can show a shrinking property that matches our practical experiments. Proofs are in Appendices G, H, and I.
Lemma 3.1 (Informal). Let 0 < ϵ_1 < ϵ_2 < 1 be the lower and upper bounds of the shrinking factor. Let x be the input and y be the output, with the residual connection y = x + F(x). For the MLP block F(x), we have ϵ_1 ≤ ∥y − x∥_2 ≤ ϵ_2. For the attention block F(x), we have ϵ_1 ≤ ∥y − x∥_2 ≤ ϵ_2.
4 DEJAVU
In this section, we present our framework for inference-time contextual sparsity search for LLMs. We introduce the sparsity predictor for MLPs in Section 4.1 and for attention heads in Section 4.2. DEJAVU's workflow is shown in Figure 2. Section 4.3 discusses exploiting our observation on LLMs to avoid the sparse-prediction overhead with theoretical guarantees. In Section 4.4, we present our optimized implementation that enables end-to-end latency reduction. More details are presented in Section D.
4.1 Contextual Sparsity Prediction in MLP Blocks
As explained in Section 2, MLP blocks are one of the major bottlenecks for LLM generation (2/3 of the FLOPs and I/O).
In this section, we discuss how we achieve wall-clock time speed-up with contextual sparsity in the MLP blocks.
Challenge: Figure 3(b) shows that, for a given token, a contextual sparsity of 95% is possible. The contextual sparsity in the MLP block can be identified after computing the activation. However, this only demonstrates the existence of contextual sparsity; it brings no benefits in terms of efficiency. A fast and precise prediction is needed to exploit contextual sparsity for end-to-end efficiency. The naive way is to select a subset of neurons randomly. Unsurprisingly, random selection fails to identify the accurate contextual sparsity, resulting in drastic model degradation.
A near-neighbor search problem: Recall that we verify the existence of contextual sparsity by recording which neurons yield significant norms. Essentially, given the input, the goal is to search for the neurons that have high inner products with the input, because the activation function "filters" low activations. Thus, we formulate the contextual sparsity prediction of an MLP layer as the classical near-neighbor search problem under the inner product metric.
Definition 4.1 (Approximate MaxIP in MLP). Let c ∈ (0, 1) and τ ∈ (0, 1) denote two parameters. Given an n-vector dataset W^1 ⊂ S^{d−1} on a unit sphere, the objective of the (c, τ)-MaxIP is to construct a data structure that, given a query y ∈ S^{d−1} such that max_{w ∈ W^1} ⟨y, w⟩ ≥ τ, retrieves a vector z from W^1 that satisfies ⟨y, z⟩ ≥ c · max_{w ∈ W^1} ⟨y, w⟩.
Remark 4.2. Our W^1 (first linear layer) and y (input embedding) in MLP blocks can be viewed as the dataset and query in Definition 4.1, respectively.
Design: The standard state-of-the-art near-neighbor search methods and implementations slow down the computation. Take OPT-175B, where d is 12288, as an example: HNSW (Malkov & Yashunin, 2018) requires more than 10 ms, and FAISS (Johnson et al., 2019) requires more than 4 ms, while the MLP computation takes only 0.2 ms. The high dimensionality and the complications of implementing the data structure on GPUs make the search time longer than the MLP computation. Therefore, we choose a neural network classifier as our near-neighbor search method to exploit fast matrix multiplication on GPUs. For each MLP block, we train a small two-layer fully connected network to predict contextual sparsity. Collecting training data is straightforward because we know the contextual sparsity from the dense computation. The training algorithm is summarized in Algorithm 1. The sparsified computation in W^1 has two steps: (1) Given y, the sparsity predictor SP_M predicts a set S_M of important neurons in the weights W^1. (2) Compute the sparsified MLP defined in Eq. (1). Note that the sparsity in the MLP here is highly structured.

Algorithm 1 Sparse Predictor Training
Input: a pre-trained LLM block with parameter set M, token embedding set at block M: {x_i}_{i ∈ [N]}, threshold t
Sparse predictor SP
P+ ← ∅, P− ← ∅
for i = 1 → N do
    P+ ← P+ ∪ {(x_i, m_r) | m_r ∈ M, m_r(x_i) ≥ t}
    P− ← P− ∪ {(x_i, m_r) | m_r ∈ M, m_r(x_i) < t}
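The listing above is cut off at this point, but the idea of collecting P+/P− labels from dense activations and fitting a small two-layer predictor can be sketched as follows. This is an illustrative stand-in, not the paper's code: the random "first MLP layer" used to generate labels, the predictor's hidden width, and the full-batch gradient-descent training loop are all assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, hidden, n_neurons, N, t = 32, 64, 128, 512, 0.5

# Stand-in block: a random first MLP layer whose ReLU activations define neuron importance.
W1 = rng.normal(size=(d, n_neurons))
X = rng.normal(size=(N, d))                              # token embeddings collected at this block
labels = (np.maximum(X @ W1, 0.0) >= t).astype(float)    # per neuron: 1.0 ~ P+, 0.0 ~ P-

# Two-layer fully connected sparsity predictor, trained with full-batch gradient descent.
A = 0.1 * rng.normal(size=(d, hidden))
B = 0.1 * rng.normal(size=(hidden, n_neurons))
lr = 0.1
for _ in range(200):
    H = np.maximum(X @ A, 0.0)                  # hidden ReLU features
    P = 1.0 / (1.0 + np.exp(-(H @ B)))          # per-neuron importance probability
    G = (P - labels) / N                        # gradient of mean BCE w.r.t. the logits
    gB = H.T @ G
    gA = X.T @ ((G @ B.T) * (H > 0))
    A -= lr * gA
    B -= lr * gB

def predict_S_M(y, k=16):
    # At inference, keep the top-k scoring neurons as the predicted set S_M.
    h = np.maximum(y @ A, 0.0)
    return np.argsort(-(h @ B))[:k]

print(predict_S_M(X[0]))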