陈俊杰 committed
Commit 9a263a1
1 Parent(s): 5706545
fontSize

app.py CHANGED
@@ -124,7 +124,7 @@ st.markdown("""
 <style>
 /* Applied to all Markdown-rendered text */
 .main-text {
-font-size:
+font-size: 18px;
 line-height: 1.6;
 }
 </style>
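The change above supplies the `font-size` value that was missing entirely in the old line, which is what the commit message (`fontSize`) refers to. A minimal, self-contained sketch of the same injection pattern (the sample paragraph text is a placeholder, not from this app):

```python
import streamlit as st

# Inject the CSS once; unsafe_allow_html lets the raw <style> block through.
st.markdown("""
<style>
/* Applied to all Markdown-rendered text */
.main-text {
    font-size: 18px;
    line-height: 1.6;
}
</style>
""", unsafe_allow_html=True)

# Any element carrying the class now picks up the 18px sizing.
st.markdown("<p class='main-text'>Sample paragraph.</p>", unsafe_allow_html=True)
```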
@@ -134,16 +134,14 @@ st.markdown("""
 if page == "Introduction":
     st.header("Introduction")
     st.markdown("""
-<div class='main-text'>
 <p class='main-text'>The Automatic Evaluation of LLMs (AEOLLM) task is a new core task in <a href="http://research.nii.ac.jp/ntcir/ntcir-18">NTCIR-18</a> to support in-depth research on large language models (LLMs) evaluation. As LLMs grow popular in both fields of academia and industry, how to effectively evaluate the capacity of LLMs becomes an increasingly critical but still challenging issue. Existing methods can be divided into two types: manual evaluation, which is expensive, and automatic evaluation, which faces many limitations including the task format (the majority belong to multiple-choice questions) and evaluation criteria (occupied by reference-based metrics). To advance the innovation of automatic evaluation, we proposed the Automatic Evaluation of LLMs (AEOLLM) task which focuses on generative tasks and encourages reference-free methods. Besides, we set up diverse subtasks such as summary generation, non-factoid question answering, text expansion, and dialogue generation to comprehensively test different methods. We believe that the AEOLLM task will facilitate the development of the LLMs community.</p>
-</div>
 """, unsafe_allow_html=True)
 
 elif page == "Methodology":
     st.header("Methodology")
     st.image("asserts/method.svg", use_column_width=True)
     st.markdown("""
-<ol>
+<ol class='main-text'>
 <li>First, we choose four subtasks as shown in the table below:</li>
 <table>
 <thead>
@@ -185,20 +183,20 @@ st.markdown("""
 elif page == "Datasets":
     st.header("Datasets")
     st.markdown("""
-<p>A brief description of the specific dataset we used, along with the original download link, is provided below:</p>
-<ul>
+<p class='main-text'>A brief description of the specific dataset we used, along with the original download link, is provided below:</p>
+<ul class='main-text'>
 <li><strong>Summary Generation (SG): <a href="https://huggingface.co/datasets/EdinburghNLP/xsum">Xsum</a></strong>: A real-world single-document news summary dataset collected from online articles by the British Broadcasting Corporation (BBC), containing over 220 thousand news documents.</li>
 <li><strong>Non-Factoid QA (NFQA): <a href="https://github.com/Lurunchik/NF-CATS">NF_CATS</a></strong>: A dataset containing 12k natural questions divided into eight categories.</li>
 <li><strong>Text Expansion (TE): <a href="https://huggingface.co/datasets/euclaise/writingprompts">WritingPrompts</a></strong>: A large dataset of 300K human-written stories paired with writing prompts from an online forum.</li>
 <li><strong>Dialogue Generation (DG): <a href="https://huggingface.co/datasets/daily_dialog">DailyDialog</a></strong>: A high-quality dataset of 13k multi-turn dialogues. The language is human-written and less noisy.</li>
 </ul>
-<p>For your convenience, we have released <strong>the training set</strong> (with human-annotated results) and <strong>the test set</strong> (without human-annotated results) on <a href="https://huggingface.co/datasets/THUIR/AEOLLM">https://huggingface.co/datasets/THUIR/AEOLLM</a>, which you can easily download.</p>
+<p class='main-text'>For your convenience, we have released <strong>the training set</strong> (with human-annotated results) and <strong>the test set</strong> (without human-annotated results) on <a href="https://huggingface.co/datasets/THUIR/AEOLLM">https://huggingface.co/datasets/THUIR/AEOLLM</a>, which you can easily download.</p>
 """,unsafe_allow_html=True)
 
 elif page == "Important Dates":
     st.header("Important Dates")
     st.markdown("""
-<p><em>All deadlines are at 11:59pm in the Anywhere on Earth (AOE) timezone.</em><br />
+<p class='main-text'><em>All deadlines are at 11:59pm in the Anywhere on Earth (AOE) timezone.</em><br />
 <span class="event"><strong>Kickoff Event</strong>:</span> <span class="date">March 29, 2024</span><br />
 <span class="event"><strong>Dataset Release</strong>:</span> <span class="date">👉May 1, 2024</span><br />
 <span class="event"><strong>System Output Submission Deadline</strong>:</span> <span class="date">Jan 15, 2025</span><br />
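The Datasets page above points to the THUIR/AEOLLM repository on the Hugging Face Hub. A hedged loading sketch (whether the train/test sets are exposed as splits or as separate configurations is an assumption, not confirmed by this diff):

```python
from datasets import load_dataset

# Repo id taken from the page text above; adjust the configuration/split
# arguments to match whatever layout the repository actually uses.
aeollm = load_dataset("THUIR/AEOLLM")
print(aeollm)
```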
@@ -211,6 +209,7 @@
 elif page == "Evaluation Measures":
     st.header("Evaluation Measures")
     st.markdown("""
+<div class='main-text'>
 - **Acc (Accuracy):** The proportion of identical preference results between the model and human annotations. Specifically, we first convert individual scores (ranks) into pairwise preferences and then calculate consistency with human annotations.
 - **Kendall's tau:** Measures the ordinal association between two ranked variables.
 
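A sketch of the two measures described above, assuming per-question score lists from the evaluated method and from human annotators. The pairwise-accuracy helper is illustrative, not the task's official scorer, and scipy is used here only as one way to compute Kendall's tau:

```python
from itertools import combinations
from scipy.stats import kendalltau

def pairwise_accuracy(model_scores, human_scores):
    """Acc: convert each score list to pairwise preferences, count agreements."""
    agree = total = 0
    for i, j in combinations(range(len(model_scores)), 2):
        m = (model_scores[i] > model_scores[j]) - (model_scores[i] < model_scores[j])
        h = (human_scores[i] > human_scores[j]) - (human_scores[i] < human_scores[j])
        agree += (m == h)
        total += 1
    return agree / total

# Hypothetical scores for four answers to one question.
model = [4, 2, 5, 1]
human = [3, 2, 5, 1]
print(pairwise_accuracy(model, human))
tau, _ = kendalltau(model, human)  # ordinal association between the two rankings
print(tau)
```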
@@ -229,13 +228,14 @@
 where:
 - $d_i$ is the difference between the ranks of corresponding elements in the two lists,
 - $n$ is the number of elements.
+</div>
 """,unsafe_allow_html=True)
 elif page == "Data and File format":
     st.header("Data and File format")
     st.markdown("""
-<p>We will be following a similar format as the ones used by most <strong>TREC submissions</strong>, which is repeated below. White space is used to separate columns. The width of the columns in the format is not important, but it is important to have exactly five columns per line with at least one space between the columns.</p>
-<p><strong>taskId questionId answerId score rank</strong></p>
-<ol>
+<p class='main-text'>We will be following a similar format as the ones used by most <strong>TREC submissions</strong>, which is repeated below. White space is used to separate columns. The width of the columns in the format is not important, but it is important to have exactly five columns per line with at least one space between the columns.</p>
+<p class='main-text'><strong>taskId questionId answerId score rank</strong></p>
+<ol class='main-text'>
 <li>the first column is the taskId (indexes different tasks)</li>
 <li>the second column is questionId (indexes different questions in the same task)</li>
 <li>the third column is answerId (indexes the answer provided by different LLMs to the same question)</li>
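A small validator sketch for the five-column submission rows described above (the column order comes from the list; treating score as a float and rank as an integer is an assumption):

```python
def parse_submission_line(line: str):
    """Validate one whitespace-separated, five-column submission row."""
    fields = line.split()
    if len(fields) != 5:
        raise ValueError(f"expected 5 columns, got {len(fields)}: {line!r}")
    task_id, question_id, answer_id, score, rank = fields
    return task_id, question_id, answer_id, float(score), int(rank)

print(parse_submission_line("1 Q001 A03 4.5 2"))  # hypothetical row
```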
@@ -252,11 +252,13 @@ elif page == "LeaderBoard":
     st.header("LeaderBoard")
     # # Description
     st.markdown("""
+<div class='main-text'>
 This leaderboard is used to show the performance of the **automatic evaluation methods of LLMs** submitted by the **AEOLLM team** on four tasks:
 - Dialogue Generation (DG)
 - Text Expansion (TE)
 - Summary Generation (SG)
 - Non-Factoid QA (NFQA)
+</div>
 """, unsafe_allow_html=True)
     # Create sample data
 
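The `# Create sample data` comment suggests the leaderboard table is built in code further down the file. A minimal sketch with pandas, using the four task columns listed above (the column layout beyond the task names is an assumption; the frame is left empty rather than inventing results):

```python
import pandas as pd
import streamlit as st

# Columns mirror the four tasks above; each row would hold one submitted method.
leaderboard = pd.DataFrame(columns=["Method", "DG", "TE", "SG", "NFQA"])
st.dataframe(leaderboard, use_container_width=True)
```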
@@ -324,11 +326,12 @@ This leaderboard is used to show the performance of the **automatic evaluation m
 elif page == "Organisers":
     st.header("Organisers")
     st.markdown("""
+<p class='main-text'>
 <em>Yiqun Liu</em> [[email protected]] (Tsinghua University)<br />
 <em>Qingyao Ai</em> [[email protected]] (Tsinghua University)<br />
 <em>Junjie Chen</em> [[email protected]] (Tsinghua University) <br />
 <em>Zhumin Chu</em> [[email protected]] (Tsinghua University)<br />
-<em>Haitao Li</em> [[email protected]] (Tsinghua University)""",unsafe_allow_html=True)
+<em>Haitao Li</em> [[email protected]] (Tsinghua University)</p>""",unsafe_allow_html=True)
 elif page == "References":
     st.header("References")
     st.markdown("""TAB""")
|