Spaces: Running on CPU Upgrade

AkimfromParis committed
Commit 378b362 • Parent(s): fa2f7aa
Update src/about.py

src/about.py +9 -8
src/about.py
CHANGED
@@ -70,9 +70,16 @@ NUM_FEWSHOT = 0 # Change with your few shot
 
 
 # Your leaderboard name
-TITLE = """<h1 align="center" id="space-title">🇯🇵 Open Japanese LLM Leaderboard 🌸<br
+TITLE = """<h1 align="center" id="space-title">🇯🇵 Open Japanese LLM Leaderboard 🌸<br>「オープンソース大規模言語モデルのリーダーボード」</h1>"""
+
+BOTTOM_LOGO = """<div style="display: flex; flex-direction: row; justify-content"><a href="https://llm-jp.nii.ac.jp/en/"><img src="https://raw.githubusercontent.com/AkimfromParis/akimfromparis/refs/heads/main/images/LLM-jp-Logo-Oct-2024.png" alt="LLM-jp" style="max-height: 100px"></a></div>"""
+
+# What does your leaderboard evaluate?
+INTRODUCTION_TEXT = """
+The __Open Japanese LLM Leaderboard__ by __[LLM-Jp](https://llm-jp.nii.ac.jp/en/)__ evaluates the performance of Japanese Large Language Models (LLMs) with more than 16 tasks from classical to modern NLP tasks. The __Open Japanese LLM Leaderboard__ was built by open-source contributors of __[LLM-Jp](https://llm-jp.nii.ac.jp/en/)__, a cross-organizational project for the research and development of Japanese LLMs supported by the _National Institute of Informatics_ in Tokyo, Japan.
+
+On the __"LLM Benchmark"__ page, the question mark **"?"** refers to the parameters that are unknown in the model card on Hugging Face. For more information about datasets, please consult the __"About"__ page or refer to the website of __[LLM-Jp](https://llm-jp.nii.ac.jp/en/)__. And on the __"Submit here!"__ page, you can evaluate the performance of your model, and be part of the leaderboard.
 
-BOTTOM_LOGO = """
 <div style="display: flex; flex-direction: row; justify-content">
 <a href="https://llm-jp.nii.ac.jp/en/">
 <img src="https://raw.githubusercontent.com/AkimfromParis/akimfromparis/refs/heads/main/images/LLM-jp-Logo-Oct-2024.png" alt="LLM-jp" style="max-height: 100px">
@@ -84,12 +91,6 @@ BOTTOM_LOGO = """
 <img src="https://raw.githubusercontent.com/AkimfromParis/akimfromparis/refs/heads/main/images/HuggingFace-Logo-Oct-2024.png" alt="HuggingFace" style="max-height: 100px">
 </a>
 """
-# What does your leaderboard evaluate?
-INTRODUCTION_TEXT = """
-The __Open Japanese LLM Leaderboard__ by __[LLM-Jp](https://llm-jp.nii.ac.jp/en/)__ evaluates the performance of Japanese Large Language Models (LLMs) with more than 16 tasks from classical to modern NLP tasks. The __Open Japanese LLM Leaderboard__ was built by open-source contributors of __[LLM-Jp](https://llm-jp.nii.ac.jp/en/)__, a cross-organizational project for the research and development of Japanese LLMs supported by the _National Institute of Informatics_ in Tokyo, Japan.
-
-On the __"LLM Benchmark"__ page, the question mark **"?"** refers to the parameters that are unknown in the model card on Hugging Face. For more information about datasets, please consult the __"About"__ page or refer to the website of __[LLM-Jp](https://llm-jp.nii.ac.jp/en/)__. And on the __"Submit here!"__ page, you can evaluate the performance of your model, and be part of the leaderboard.
-"""
 
 # Which evaluations are you running? how can people reproduce what you have?
 LLM_BENCHMARKS_TEXT = f"""
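The commit collapses the multi-line `BOTTOM_LOGO` HTML into a single-line triple-quoted string, which avoids embedding Python indentation in the rendered markup. A minimal sketch of an alternative (not part of this commit, and only an assumption about how one might write it) keeps the HTML indented for readability and strips the common leading whitespace with the standard library's `textwrap.dedent`:

```python
import textwrap

# Hypothetical alternative to the single-line BOTTOM_LOGO in the commit:
# write the HTML indented inside the source file, then remove the shared
# leading whitespace so the rendered markup has none.
BOTTOM_LOGO = textwrap.dedent("""\
    <div style="display: flex; flex-direction: row; justify-content">
      <a href="https://llm-jp.nii.ac.jp/en/">
        <img src="https://raw.githubusercontent.com/AkimfromParis/akimfromparis/refs/heads/main/images/LLM-jp-Logo-Oct-2024.png" alt="LLM-jp" style="max-height: 100px">
      </a>
    </div>
    """)
```

`dedent` removes only the whitespace common to every line, so the nested `<a>` and `<img>` tags keep their relative two-space indentation while the string itself starts flush with `<div>`.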