princeton-nlp committed on
Commit a0b28ac
1 Parent(s): fe55aed

Update README.md

Files changed (1)
  1. README.md +46 -218
README.md CHANGED
@@ -1,236 +1,64 @@
1
  ---
2
  license: llama3
3
  ---
 
4
  # princeton_nlp/Llama-3-8B-ProLong-64k-Instruct
5
 
6
- Contributors: Tianyu Gao*, Alexander Wettig* (*equal contribution), Howard Yen, Danqi Chen
7
 
8
- Contact: `{tianyug, awettig}@princeton.edu`
9
 
10
- 💡 ProLong stands for **Pr**incet**o**n **Long**-Context!
11
 
12
- ## The ProLong Series
13
 
14
  - [princeton_nlp/Llama-3-8B-ProLong-64k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Base)
15
  - [princeton_nlp/Llama-3-8B-ProLong-64k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Instruct) ← you are here!
16
- - princeton_nlp/Llama-3-8B-ProLong-512k-Base (soon-to-come)
17
- - princeton_nlp/Llama-3-8B-ProLong-512k-Instruct (soon-to-come)
18
-
19
- ## Features
20
-
21
-
22
- - Based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) (original max length: 8K), we produce a long-context instruction-tuned model that can stably handle up to 64K tokens. We also have a version that can process up to 512K tokens.
23
- - This model is trained on
24
- - a 20B-token, carefully curated mixture of short and long data (max length 64K). You can find our base model [here](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Base).
25
- - For the 512K version, we continue training the base model for 5B more tokens, with a mixture of short, long (64K), and ultra long (512K) data.
26
- - Then we fine-tuned them on [UltraChat](https://huggingface.co/datasets/stingning/ultrachat) to regain chat ability.
27
- - On a range of long-context tasks, our ProLong model achieves the top performance among models of similar sizes.
28
- - We conduct extensive ablations in our preliminary experiments, looking for the most effective way to extend LMs’ context length. We will include more details in our soon-to-come technical report.
29
-
30
-
31
- ## Benchmarking results
32
-
33
-
34
-
35
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/607f846419a5af0183d7bfb9/PPSuEMsUWIyrmrOV_88Xf.png)
36
-
37
-
38
- You can find results for more tasks and models in this [spreadsheet](https://docs.google.com/spreadsheets/d/1qGzimBE8F896p1m7_yWHnjyGX7kpEAeyaT1h2iTbNzE/edit?usp=sharing). In these detailed results, we show that our model retains the original Llama-3's general LM performance (on tasks selected by the [HF Open LLM Leaderboard v1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard)). This is non-trivial in long-context fine-tuning and requires a careful selection of the fine-tuning data mixture and the training configurations.
39
-
40
-
41
- Understanding long-context performance is tricky, as there is no consensus on what constitutes effective long-context evaluation or how well existing benchmarks reflect real-world use cases. In this work, we curate a combination of existing and new tasks, spanning both synthetic and natural datasets, to demonstrate the strength of our model.
42
-
43
- We divide the tasks into the following categories:
44
-
45
- - **Recall**: we use a synthetic JSON key-value retrieval task (lost-in-the-middle, [Liu et al., 2023](https://arxiv.org/pdf/2307.03172); ∞BENCH, [Zhang et al., 2024](https://arxiv.org/pdf/2402.13718)) to test the model’s ability to retrieve arbitrary information from the context; see the sketch after this list. This is a more comprehensive and reliable version of [needle-in-a-haystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack).
46
- - **Retrieval-augmented generation (RAG)**: we use existing open-domain question answering datasets in a multi-document QA format ([Liu et al., 2023](https://arxiv.org/pdf/2307.03172)). Datasets we select include Natural Questions ([Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026.pdf)), HotpotQA ([Yang et al., 2018](https://arxiv.org/pdf/1809.09600)), and PopQA ([Mallen et al., 2023](https://arxiv.org/pdf/2212.10511)). The gold document is placed at different positions to test “lost-in-the-middle”.
47
- - **In-context learning (ICL)**: ICL tasks have been established as a way to evaluate long-context abilities ([Li et al., 2024](https://arxiv.org/pdf/2404.02060); [Bertsch et al., 2024](https://arxiv.org/pdf/2405.00200)). We follow [Bertsch et al., 2024](https://arxiv.org/pdf/2405.00200) and use the following five tasks: TREC, TREC-fine ([Hovy et al., 2001](https://aclanthology.org/H01-1069.pdf)), NLU ([Liu et al., 2019](https://arxiv.org/pdf/1903.05566)), Banking-77 ([Casanueva et al., 2020](https://aclanthology.org/2020.nlp4convai-1.5.pdf)), and Clinc-150 ([Larson et al., 2019](https://aclanthology.org/D19-1131.pdf)).
48
- - **Reranking**: Given a query and a number of retrieved passages (by an off-the-shelf model), reranking requires the model to generate the IDs of the top-10 passages. This has been shown to be a realistic application ([Sun et al., 2023](https://arxiv.org/pdf/2304.09542)) and is also challenging, as it requires reasoning/comparison across documents. We use MSMARCO ([Bajaj et al., 2018](https://arxiv.org/pdf/1611.09268)) for this task.
49
- - **Long-document QA/summarization**: These are the most straightforward applications. We select some of the public tasks with the longest documents, including NarrativeQA ([Kočiský et al., 2017](https://arxiv.org/pdf/1712.07040)), Qasper ([Dasigi et al., 2021](https://arxiv.org/pdf/2105.03011)), QMSum ([Zhong et al., 2021](https://arxiv.org/pdf/2104.05938)), and Multi-LexSum ([Shen et al., 2022](https://arxiv.org/pdf/2206.10883)). As traditional evaluation metrics like ROUGE or F1 do not reflect performance well, we use GPT-4o to score the model output given the gold output and the question.
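
To make the recall setup concrete, here is a minimal sketch of how a JSON key-value retrieval example might be constructed; the number of pairs, the UUID key/value format, and the prompt wording are illustrative assumptions, not the exact ProLong evaluation code.

```python
import json
import random
import uuid

def make_json_kv_example(num_pairs: int = 500, seed: int = 0):
    """Build one synthetic JSON key-value retrieval example.

    A large JSON object of random UUID keys and values is placed in the context,
    and the model is asked for the value of one key whose position is sampled
    uniformly at random (to probe "lost in the middle").
    """
    rng = random.Random(seed)
    pairs = [
        (str(uuid.UUID(int=rng.getrandbits(128))), str(uuid.UUID(int=rng.getrandbits(128))))
        for _ in range(num_pairs)
    ]
    target_key, target_value = pairs[rng.randrange(num_pairs)]

    prompt = (
        "Extract the value corresponding to the specified key from the JSON object below.\n\n"
        + json.dumps(dict(pairs), indent=1)
        + f"\n\nKey: {target_key}\nValue:"
    )
    return prompt, target_value

prompt, answer = make_json_kv_example(num_pairs=50)
print(prompt[:300], "...")
print("expected:", answer)
```
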
50
-
51
- <details><summary>Find details about our GPT-4o rubrics for the long-document QA/summarization tasks.</summary>
52
-
53
- We use the following prompt to evaluate NarrativeQA and Qasper:
54
-
55
- ```
56
- Please act as an impartial judge and evaluate the quality of the provided answer which attempts to answer the provided question based on a provided context.
57
- Although you are not given the context, you will be given a set of correct answers that achieves full scores on all metrics, and you need to assess the provided answers using the correct answers.
58
-
59
- Below is your grading rubric:
60
-
61
- Fluency:
62
- - Score 0 (incoherent, repetitive, or incomplete): Incoherent sentences, repetitive sentences (even if not by exact words), incomplete answers, or gibberish. Note that even if the answer is coherent, if it is repetitive or incomplete, it should be given a score of 0.
63
- - Score 1 (coherent, non-repetitive answer): Coherent, non-repetitive, fluent, grammatically correct answers.
64
-
65
- Correctness:
66
- - Score 0 (Incorrect): The answer does not agree with the provided correct answers at all.
67
- - Score 1 (partly correct): Partly agree with one of the provided correct answers (for example, the question asks for a date and a person; the answer gets the date right but the person wrong).
68
- - Score 2 (correct but not fully relevant): Fully agrees with one of the provided correct answers but mentions other completely irrelevant information. Note that extra details provided in the answer, even if not mentioned in the correct answers, should NOT be seen as irrelevant as long as they are relevant to the question to a reasonable extent.
69
- - Score 3 (correct and relevant): Fully agrees with one of the provided correct answers and only provides information relevant to the question. Note that if the answer is longer than the correct answer, as long as everything in the answer is relevant to the question, it should still be given score 3. For example, if the correct answer is "the North Pole" and the answer is "They are headed for the North Pole", it should still be given a score of 3.
70
-
71
- Now, read the following question, answer, and correct answers. First think step-by-step and provide your reasoning and assessment on the answer. Then output your score in the following json format: {{"fluency": 0, "correctness": 1}}.
72
-
73
- Question: {question}
74
- Correct answers: {correct_answers}
75
- Answer: {output}
76
- ```
77
-
78
- For QMSum:
79
-
80
- ```
81
- Please act as an impartial judge and evaluate the quality of the provided summary with respect to a summarization inquiry based on a meeting transcript.
82
- Although you are not given the transcript, you will be given a reference summary that achieves full scores on all metrics, and you need to assess the provided summary using the reference one.
83
-
84
- Below is your grading rubric:
85
-
86
- Fluency:
87
- - Score 0 (incoherent, repetitive, or incomplete): Incoherent sentences, repetitive sentences (even if not by exact words), incomplete sentences, or gibberish. Note that even if the answer is coherent, if it is repetitive or incomplete, it should be given a score of 0.
88
- - Score 1 (coherent, non-repetitive answer): Coherent, non-repetitive, fluent, grammatically correct summaries.
89
-
90
- Correctness:
91
- - Score 0 (Incorrect): The summary does not agree (have overlap) with the reference summary at all.
92
- - Score 1 (<=30% correct): Covers less than 30% of the reference summary.
93
- - Score 2 (<=80% correct): Covers 30%-80% of the reference summary.
94
- - Score 3 (>80% correct, but not fully relevant): Covers more than 80% of the reference summary, but mentions other completely irrelevant information. Note that extra details provided in the summary, even if not mentioned in the reference summary, should NOT be seen as irrelevant as long as they are relevant to the query to a reasonable extent.
95
- - Score 4 (>80% correct and relevant): Almost fully agrees with the reference and only provides information relevant to the question.
96
-
97
- Now, read the following question, reference summary, and provided summary. First think step-by-step and provide your reasoning and assessment on the answer. Then output your score in the following json format: {{"fluency": 0, "correctness": 1}}.
98
-
99
- Question: {question}
100
- Reference summary: {correct_answers}
101
- Provided summary: {output}
102
- ```
103
-
104
- Multi-LexSum
105
-
106
- ```
107
- Please act as an impartial judge and evaluate the quality of the provided summary of a civil lawsuit. The summary is based on a set of legal documents, and it should contain a short description of the background, the parties involved, and the outcomes of the case.
108
- You are not given the entirety of the legal documents, but you will be given expert-written summaries to help you evaluate the quality of the provided summary. The expert-written summaries come in two forms: the short expert summary contains all the relevant information that the provided summary should contain, and the long expert summary contains other relevant information that the provided summary may or may not contain.
109
-
110
- Below is your grading rubric:
111
-
112
- Fluency:
113
- - Score 0 (incoherent, repetitive, or incomplete): Incoherent sentences, repetitive sentences (even if not by exact words), incomplete answers, or gibberish. Note that even if the answer is coherent, if it is repetitive or incomplete, it should be given a score of 0.
114
- - Score 1 (coherent, non-repetitive answer): Coherent, non-repetitive, fluent, grammatically correct answers.
115
-
116
- Correctness:
117
- - Score 0 (Incorrect): The summary does not agree with the information provided in the expert summaries at all. The summary either does not contain any information or only contains irrelevant or incorrect information.
118
- - Examples:
119
- - Expert short summary: "This case is about an apprenticeship test that had a disparate impact on Black apprenticeship applicants."
120
- - Provided summary: "This case is about a lawsuit filed by the EEOC against a company for discrimination against Asian employees."
121
- - Score 1 (<=30% correct): Covers less than 30% of the expert short summary.
122
- - Score 2 (<=80% correct): Covers 30%-80% of the expert short summary.
123
- - Score 3 (>80% correct, but irrelevant or incorrect information found): Covers more than 80% of the expert short summary, but mentions other completely irrelevant information or incorrect information.
124
- - Irrelevant information is information that is not relevant to the case and is not found in the expert short/long summaries.
125
- - Incorrect information is those that are factually incorrect or in conflict with the expert summaries.
126
- - Score 4 (>80% correct and relevant): The provided summary contains almost all major points found in the expert short summary and does not contain any irrelevant information.
127
-
128
- Now, read the provided summary and expert summaries, and evaluate the summary using the rubric. First think step-by-step and provide your reasoning and assessment on the answer. Then output your score in the following json format: {{"fluency": 0, "correctness": 1}}.
129
-
130
- Expert long summary: {long_expert_summary}
131
-
132
- Expert short summary: {short_expert_summary}
133
-
134
- Provided summary: {output}
135
- ```
136
-
137
- We get both a fluency score (0/1) and a correctness score (0-3 for QA and 0-4 for summarization). The final score is fluency * correctness (think of fluency as a “prerequisite”), normalized to 0-100.
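
As a concrete illustration of this aggregation, the snippet below parses the judge's output and combines the two scores; the JSON-parsing logic is an assumption about the response format implied by the prompts above, not the exact evaluation code.

```python
import json
import re

def combine_scores(judge_output: str, max_correctness: int) -> float:
    """Combine a GPT-4o judge response into a single 0-100 score.

    max_correctness is 3 for the QA tasks and 4 for the summarization tasks.
    Fluency acts as a prerequisite: a disfluent answer scores 0 overall.
    """
    # The prompts ask for a JSON object such as {"fluency": 1, "correctness": 2};
    # take the last JSON-looking span in case the judge prepends its reasoning.
    spans = re.findall(r"\{[^{}]*\}", judge_output)
    scores = json.loads(spans[-1]) if spans else {"fluency": 0, "correctness": 0}
    return 100.0 * scores["fluency"] * scores["correctness"] / max_correctness

print(combine_scores('Reasoning... {"fluency": 1, "correctness": 2}', max_correctness=3))  # ~66.7
```
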
138
- </details>
139
-
140
-
141
- Note that we are still actively developing our evaluation and the results/tasks are subject to change. We plan to include a more systematic evaluation in our technical report. The evaluation code will be available [here](https://github.com/princeton-nlp/ProLong).
142
-
143
- <details>
144
- <summary>Some more details about the evaluation.</summary>
145
-
146
- - All evaluation context lengths are determined with the Llama-2 tokenizer, to accommodate models with smaller vocabularies.
147
- - For JSON KV and RAG, we randomly sample the positions of the target key-value pairs or the gold passages to test “lost-in-the-middle”.
148
- - For ICL, we use abstract labels (0,1,2,3…) instead of natural language labels ([Pan et al., 2023](https://arxiv.org/pdf/2305.09731)) to evaluate models’ ability to learn new tasks; see the sketch after this list.
149
- - We use greedy decoding for all models/tasks.
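
To illustrate the abstract-label ICL setup mentioned above, here is a minimal sketch that remaps label names to integer IDs before building a many-shot prompt; the prompt template and the example data are illustrative assumptions rather than the actual evaluation code.

```python
import random

def build_icl_prompt(demos, test_text, seed=0):
    """Build a many-shot ICL prompt with abstract labels.

    demos: list of (text, label_name) pairs, e.g. drawn from TREC or Banking-77.
    Label names are replaced by integer IDs so the model has to learn the task
    from the demonstrations rather than from the label semantics.
    """
    rng = random.Random(seed)
    label_to_id = {name: i for i, name in enumerate(sorted({y for _, y in demos}))}
    demos = list(demos)
    rng.shuffle(demos)
    blocks = [f"input: {x}\nlabel: {label_to_id[y]}" for x, y in demos]
    blocks.append(f"input: {test_text}\nlabel:")
    return "\n\n".join(blocks)

print(build_icl_prompt(
    [("how do I reset my card PIN?", "change_pin"),
     ("my refund has not arrived yet", "refund_not_showing_up")],
    "where is my money?",
))
```
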
150
-
151
- </details>
152
-
153
-
154
- ## Efficient training techniques
155
-
156
- We integrate several efficient training techniques to produce our models:
157
-
158
- - We use [FlashAttention-2 (Dao et al., 2023)](https://github.com/Dao-AILab/flash-attention)’s variable length attention and stop the attention across document boundaries. We combine variable length attention with smart batching (batching sequences with similar lengths in one step) and achieve a significant speedup.
159
- - To handle Llama-3’s large vocabulary and avoid the memory overhead of materializing a huge logit matrix, we compute the cross-entropy loss in chunks of 8,192 tokens (see the sketch after this list).
160
- - For training the 512K model, we adapt [DeepSpeed-Ulysses (Jacobs et al., 2023)](https://www.deepspeed.ai/tutorials/ds-sequence/) for sequence parallelism.
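
To make the chunked loss computation concrete, here is a minimal PyTorch sketch that computes the cross-entropy in 8,192-token chunks so that the full `[sequence, vocabulary]` logit matrix is never materialized at once; this illustrates the idea and is not the exact ProLong training code.

```python
import torch
import torch.nn.functional as F

def chunked_cross_entropy(hidden: torch.Tensor,      # [num_tokens, hidden_dim]
                          lm_head: torch.nn.Linear,  # hidden_dim -> vocab_size
                          labels: torch.Tensor,      # [num_tokens], already shifted, -100 = ignore
                          chunk_size: int = 8192) -> torch.Tensor:
    """Cross-entropy computed chunk by chunk so that only a
    [chunk_size, vocab_size] logit matrix exists at any time."""
    total_loss = hidden.new_zeros(())
    total_count = torch.zeros((), dtype=torch.long, device=hidden.device)
    for start in range(0, hidden.size(0), chunk_size):
        h = hidden[start:start + chunk_size]
        y = labels[start:start + chunk_size]
        logits = lm_head(h).float()  # logits for this chunk only
        total_loss = total_loss + F.cross_entropy(logits, y, ignore_index=-100, reduction="sum")
        total_count = total_count + (y != -100).sum()
    return total_loss / total_count.clamp(min=1)
```
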
161
-
162
- ## Stage 1: long-context training
163
-
164
- We used the following data mixture and trained [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for 20B tokens.
165
-
166
- <div style="display: flex;">
167
-
168
- <div style="flex: 3; margin-right: 5px;">
169
-
170
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/607f846419a5af0183d7bfb9/JMKoID3e6Xd7MfJtfM5Nm.png)
171
-
172
- </div>
173
- <div style="flex: 1;">
174
-
175
-
176
- | Data sources |
177
- |:----- |
178
- | [Books](https://huggingface.co/datasets/cerebras/SlimPajama-627B) Only documents > 64K tokens |
179
- | [Textbooks](https://arxiv.org/pdf/2402.11111) Chapters concat. by book and topic |
180
- | [The Stack V1](https://huggingface.co/datasets/bigcode/the-stack) Source files concat. by repo; only documents > 64K tokens |
181
- | [StackExchange](https://huggingface.co/datasets/cerebras/SlimPajama-627B) |
182
- | [Tulu-v2](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) |
183
- | [Wikipedia](https://allenai.github.io/dolma/) |
184
- | [Arxiv](https://huggingface.co/datasets/cerebras/SlimPajama-627B) |
185
- | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) |
186
- | [FineWeb](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1) |
187
- | [FineWeb-EDU](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1) |
188
- </div>
189
- </div>
190
-
191
- For The Stack v1, we concatenate all the files from the same repo (a strategy introduced by [DeepSeek Coder; Guo et al., 2024](https://github.com/deepseek-ai/DeepSeek-Coder)). For The Stack v1 and Books, we only keep documents that are longer than 64K tokens.
192
-
193
- We use the following hyperparameters:
194
-
195
- | Name | Hyperparameter |
196
- |:------- |:------- |
197
- | Batch size | 4M tokens |
198
- | Peak learning rate | 1e-5 |
199
- | Scheduling | 5% warmup, cosine decay to 10% of the peak learning rate |
200
- | Total #tokens | 20B |
201
- | Rope theta | 8M |
202
-
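As a hedged sketch of how the context window and RoPE theta from the table above might be set when continuing training from Llama-3-8B-Instruct with Hugging Face `transformers` (the actual ProLong training stack lives in the GitHub repo and differs from this):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Start from the Llama-3-8B-Instruct configuration and extend the context window.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
config.rope_theta = 8_000_000            # "Rope theta: 8M" from the table above
config.max_position_embeddings = 65536   # 64K-token training length

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    config=config,
    torch_dtype=torch.bfloat16,
)
# ...continue training for ~20B tokens with a 4M-token batch size and a 1e-5 peak LR.
```
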
203
- In our preliminary experiments, we found that
204
-
205
- - One of the challenges in long-context training is to preserve the general LM performance.
206
- - At the beginning of training, the general LM performance degrades, potentially due to optimizer state warmup, data mixture mismatch, and the length extension.
207
- - We found that the right rope theta + the right data mixture + longer training + low LR help alleviate this problem.
208
- - Using variable length attention and stopping attention across document boundaries helps preserve the general LM performance (see the sketch after this list).
209
- - Other warmup schemes like progressively increasing the length (similar to [LWM; Liu et al., 2024](https://arxiv.org/pdf/2402.08268)) do not seem to provide more benefit in our experiments.
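
As a minimal sketch of the variable-length attention mentioned above: document lengths within a packed sequence are turned into the `cu_seqlens` tensor that FlashAttention-2's varlen kernel consumes, so no token attends across a document boundary. The function and argument names follow the public `flash-attn` API, but this is an illustration rather than the ProLong training code.

```python
import torch
import torch.nn.functional as F
from flash_attn import flash_attn_varlen_func  # FlashAttention-2

def attend_within_documents(q, k, v, doc_lens):
    """Causal attention over a packed sequence without crossing document boundaries.

    q, k, v: [total_tokens, num_heads, head_dim] for one packed training sequence.
    doc_lens: lengths of the documents packed into this sequence (sums to total_tokens).
    """
    lens = torch.tensor(doc_lens, dtype=torch.int32, device=q.device)
    cu_seqlens = F.pad(torch.cumsum(lens, dim=0, dtype=torch.int32), (1, 0))  # [0, l0, l0+l1, ...]
    return flash_attn_varlen_func(
        q, k, v,
        cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
        max_seqlen_q=max(doc_lens), max_seqlen_k=max(doc_lens),
        causal=True,  # each token attends only to earlier tokens in the same document
    )
```
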
210
-
211
- We will release more details of our ablations in our technical report!
212
-
213
- ## Stage 2: instruction tuning
214
-
215
- We conduct supervised fine-tuning (SFT) on our base long-context model. In our preliminary experiments, we found that using [UltraChat (Ding et al., 2023)](https://huggingface.co/datasets/stingning/ultrachat) leads to the best long-context results (among [UltraChat](https://huggingface.co/datasets/stingning/ultrachat), [Tulu (Wang et al., 2023)](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), and [ShareGPT](https://sharegpt.com/)). Note that this only reflects performance on our benchmark and does not represent the overall quality of those datasets. The hyperparameters we used for SFT are as follows:
216
-
217
- | Name | Hyperparameter |
218
- |:------- |:------- |
219
- | Batch size | 4M tokens |
220
- | Peak learning rate | 2e-5 |
221
- | Scheduling | 5% warmup, cosine decay to 10% of the peak learning rate |
222
- | Total #tokens | 1B |
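
As an illustrative sketch (not the ProLong SFT pipeline), a chat dataset such as UltraChat can be rendered with Llama-3's chat template before the sequences are packed for fine-tuning; the role/content fields below are an assumption about the data layout.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# One chat example in role/content form (an assumption about the UltraChat layout).
conversation = [
    {"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."},
    {"role": "assistant", "content": "Ishmael joins Captain Ahab's obsessive hunt for the white whale..."},
]

# Render with the Llama-3 chat template; during SFT the loss would typically be
# applied only to the assistant tokens of the rendered sequence.
input_ids = tokenizer.apply_chat_template(conversation, tokenize=True, return_tensors="pt")
print(input_ids.shape)
```
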
223
 
224
- - Synthetic data: we also experiment with several strategies to generate long, synthetic chat data, but they have not yet helped to improve upon our UltraChat-fine-tuned chat models. The synthetic data strategies we tried include (1) using a paragraph of a long book/repo to generate question-answer pairs; (2) using hierarchical methods to summarize a long book; (3) turning the previous synthetic long QA data into a RAG format.
225
 
226
  ## Citation
227
 
228
- If you find our model useful, please cite:
229
  ```bibtex
230
- @misc{gao2024prolong,
231
- title={ProLong Long-Context Language Model Series},
232
- author={Gao, Tianyu and Wettig, Alexander and Yen, Howard and Chen, Danqi},
233
- year={2024},
234
- url="https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Instruct"
235
  }
236
  ```
 
1
  ---
2
  license: llama3
3
+ datasets:
4
+ - princeton-nlp/prolong-data-64K
5
+ - HuggingFaceH4/ultrachat_200k
6
+ base_model:
7
+ - princeton-nlp/Llama-3-8B-ProLong-64k-Base
8
  ---
9
+
10
  # princeton_nlp/Llama-3-8B-ProLong-64k-Instruct
11
 
12
+ [[Paper](https://arxiv.org/pdf/2410.02660)] [[HF Collection](https://huggingface.co/collections/princeton-nlp/prolong-66c72d55d2051a86ac7bd7e4)] [[Code](https://github.com/princeton-nlp/ProLong)]
13
 
 
14
 
15
+ **ProLong** (<u>Pr</u>incet<u>o</u>n <u>long</u>-context language models) is a family of long-context models obtained from Llama-3-8B through continued training and supervised fine-tuning, with a maximum context window of 512K tokens. Our [main ProLong model](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct) is one of the best-performing long-context models at the 10B scale (evaluated by [HELMET](https://github.com/princeton-nlp/helmet)).
16
+
17
+ To train this strong long-context model, we conduct thorough ablations on the long-context pre-training data, SFT data, and numerous other design choices. We demonstrate our findings in our paper, [How to Train Long-Context Language Models (Effectively)](https://arxiv.org/pdf/2410.02660).
18
+
19
+
20
+ Authors: [Tianyu Gao](https://gaotianyu.xyz/about)\*, [Alexander Wettig](https://www.cs.princeton.edu/~awettig/)\*, [Howard Yen](https://howard-yen.github.io/), [Danqi Chen](https://www.cs.princeton.edu/~danqic/) (* equal contribution)
21
+
22
+ Contact: `{tianyug, awettig}@princeton.edu`
23
 
24
+ ## The ProLong Models
25
 
26
  - [princeton_nlp/Llama-3-8B-ProLong-64k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Base)
27
  - [princeton_nlp/Llama-3-8B-ProLong-64k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Instruct) ← you are here!
28
+ - [princeton_nlp/Llama-3-8B-ProLong-512k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Base)
29
+ - ⭐ [princeton_nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct)
30
+
31
+ ## Model card
32
+
33
+ Here are some quick facts about our main ProLong model: [princeton-nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct).
34
+ * Base model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
35
+ * Long-context continued training: 20B tokens on 64K training data ([princeton-nlp/prolong-data-64K](https://huggingface.co/datasets/princeton-nlp/prolong-data-64K)), and 20B tokens on 512K training data ([princeton-nlp/prolong-data-512K](https://huggingface.co/datasets/princeton-nlp/prolong-data-512K))
36
+ * Supervised fine-tuning (SFT): [UltraChat](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
37
+ * Maximum context window: 512K tokens
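
A minimal usage sketch with 🤗 Transformers is shown below; the generation settings and the flash-attention flag are illustrative, not required.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "princeton-nlp/Llama-3-8B-ProLong-64k-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",  # optional; helps with long inputs
)

# The document can be up to ~64K tokens for this checkpoint (512K for the 512K models).
long_document = open("report.txt").read()
messages = [{"role": "user", "content": f"{long_document}\n\nSummarize the key findings above."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0, inputs.shape[-1]:], skip_special_tokens=True))
```
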
38
+
39
+
40
+ <p align="center" style="margin-bottom: 0;">
41
+ <img width="80%" alt="image" src="https://github.com/user-attachments/assets/c31c9671-49fe-4776-91d2-de70ffd9f9a1">
42
+ </p>
43
+ <p align="center" style="margin-top: 0; padding-top: 0;">
44
+ <em>ProLong performance on <a href="https://github.com/princeton-nlp/helmet">HELMET</a> averaged over 32K, 64K, and 128K lengths. All models are instruct models.</em>
45
+ </p>
46
+
47
+
48
+ <p align="center">
49
+ <img width="80%" alt="image" src="https://github.com/user-attachments/assets/a36a7d0f-4480-4a29-80f3-208477707fb7">
50
+ </p>
51
+ <p align="center" style="margin-top: 0;">
52
+ <em>ProLong training recipe.</em>
53
+ </p>
54
 
 
55
 
56
  ## Citation
57
 
 
58
  ```bibtex
59
+ @article{gao2024prolong,
60
+ title={How to Train Long-Context Language Models (Effectively)},
+ journal={arXiv preprint arXiv:2410.02660},
61
+ author={Gao, Tianyu and Wettig, Alexander and Yen, Howard and Chen, Danqi},
62
+ year={2024},
 
63
  }
64
  ```