AkimfromParis committed
Commit 32e46f5 • 1 Parent(s): 8ad23c3

Update Introduction Text shorter

Files changed (1)
  1. src/about.py +6 -7
src/about.py CHANGED
@@ -69,24 +69,23 @@ NUM_FEWSHOT = 0 # Change with your few shot
  # ---------------------------------------------------
 
 
-
  # Your leaderboard name
- TITLE = """<h1 align="center" id="space-title">Open Japanese LLM Leaderboard by LLM-Jp</h1>"""
+ TITLE = """<h1 align="center" id="space-title">Open Japanese LLM Leaderboard</h1>"""
 
  # What does your leaderboard evaluate?
  INTRODUCTION_TEXT = """
- 🇯🇵 The Open Japanese LLM Leaderboard 🌸 by [LLM-Jp](https://llm-jp.nii.ac.jp/en/) evaluates the performance of Japanese Large Language Models (LLMs).
+ 🇯🇵 The **Open Japanese LLM Leaderboard** 🌸 by [LLM-Jp](https://llm-jp.nii.ac.jp/en/) evaluates the performance of Japanese Large Language Models (LLMs).
 
- This leaderboard was built by contributors of [LLM-Jp](https://llm-jp.nii.ac.jp/en/), a cross-organizational project for the research and development of Japanese large language models (LLMs). Organized by the National Institute of Informatics, LLM-jp aims to develop open-source and strong Japanese LLMs with more than 1,500 participants from academia and industry as of July 2024.
+ The **Open Japanese LLM Leaderboard** was built by open-source contributors of [LLM-Jp](https://llm-jp.nii.ac.jp/en/), a cross-organizational project for the research and development of Japanese large language models (LLMs) organized by the _National Institute of Informatics_ with more than 1,500 participants from academia and industry. The Open Japanese LLM Leaderboard assesses language understanding of Japanese LLMs with more than 51 benchmarks from classical to modern NLP tasks such as Natural language inference, Question Answering, Machine Translation, Code Generation, Mathematical reasoning, Summarization, etc.
 
- This Open Japanese LLM Leaderboard assesses language understanding, of Japanese LLMs with more than 52 benchmarks from classical to modern NLP tasks such as Natural language inference, Question Answering, Machine Translation, Code Generation, Mathematical reasoning, Summarization, etc. When you submit a model on the "Submit here!" page, it is automatically evaluated on a set of benchmarks. For more information about benchmarks, please consult the "About" page. For more details, please refer to the website of [LLM-Jp](https://llm-jp.nii.ac.jp/en/)
+ When you submit a model on the "Submit here!" page, it is automatically evaluated on a set of benchmarks. For more information, please consult the "About" page or refer to the website of [LLM-Jp](https://llm-jp.nii.ac.jp/en/)
 
  """
 
  # Which evaluations are you running? how can people reproduce what you have?
  LLM_BENCHMARKS_TEXT = f"""
  ## How it works
- 📈 We evaluate Japanese Large Language Models on 52 key benchmarks leveraging our evaluation tool [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval), a unified framework to evaluate Japanese LLMs on various evaluation tasks.
+ 📈 We evaluate Japanese Large Language Models on 51 key benchmarks leveraging our evaluation tool [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval), a unified framework to evaluate Japanese LLMs on various evaluation tasks.
 
  **NLI (Natural Language Inference)**
 
@@ -100,7 +99,7 @@ LLM_BENCHMARKS_TEXT = f"""
 
  * `JSICK`, Japanese Sentences Involving Compositional Knowledge [Source](https://github.com/verypluming/JSICK) (License CC BY-SA 4.0)
 
- **NQA (Question Answering)**
+ **QA (Question Answering)**
 
  * `JEMHopQA`, Japanese Explainable Multi-hop Question Answering [Source](https://github.com/aiishii/JEMHopQA) (License CC BY-SA 4.0)
 
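For context on how the edited constants are consumed: leaderboard Spaces built from the standard Hugging Face demo-leaderboard layout usually have an `app.py` that imports `TITLE`, `INTRODUCTION_TEXT`, and `LLM_BENCHMARKS_TEXT` from `src/about.py` and renders them with Gradio. The snippet below is a minimal sketch under that assumption; the actual `app.py` of this Space is not part of this commit.

```python
# Minimal sketch (assumption): rendering the constants edited in this commit
# in a demo-leaderboard-style Gradio app. Not the Space's actual app.py.
import gradio as gr

from src.about import TITLE, INTRODUCTION_TEXT, LLM_BENCHMARKS_TEXT

with gr.Blocks() as demo:
    # TITLE is raw HTML (an <h1> tag), so it goes through gr.HTML.
    gr.HTML(TITLE)
    # INTRODUCTION_TEXT is Markdown shown above the leaderboard table.
    gr.Markdown(INTRODUCTION_TEXT, elem_classes="markdown-text")
    with gr.Tabs():
        with gr.TabItem("About"):
            # LLM_BENCHMARKS_TEXT documents the benchmark suite on the "About" tab.
            gr.Markdown(LLM_BENCHMARKS_TEXT, elem_classes="markdown-text")

if __name__ == "__main__":
    demo.launch()
```

Keeping all display copy in `src/about.py`, as this commit does, means a wording change like this one never touches the app logic.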