AkimfromParis committed
Commit 8bd7a58
1 Parent(s): 12febe1

Introduction Text English V1.0 by Akim

Files changed (1): src/about.py (+9 -4)
src/about.py CHANGED

@@ -71,13 +71,18 @@ NUM_FEWSHOT = 0 # Change with your few shot
 
 
 # Your leaderboard name
-TITLE = """<h1 align="center" id="space-title">LLM-JP leaderboard</h1>"""
+TITLE = """<h1 align="center" id="space-title">Open Japanese LLM Leaderboard by LLM-jp</h1>"""
 
 # What does your leaderboard evaluate?
 INTRODUCTION_TEXT = """
-This Leader-Board automatically evaluates large-scale Japanese language models across multiple datasets.
-Check here for supported evaluation methods.
-https://github.com/llm-jp/llm-jp-eval/blob/main/DATASET.md
+:jp: The Open Japanese LLM Leaderboard :cherry_blossom: by [LLM-jp](https://llm-jp.nii.ac.jp/en/) evaluates the performance of Japanese Large Language Models (LLMs).
+
+This leaderboard was built by [LLM-jp](https://llm-jp.nii.ac.jp/en/), a cross-organizational project for the research and development of Japanese large language models (LLMs). Organized by the National Institute of Informatics, LLM-jp aims to develop strong, open-source Japanese LLMs; as of this writing, more than 1,500 participants from academia and industry are working together toward this goal.
+When you submit a model on the "Submit here!" page, it is automatically evaluated on a set of benchmarks. Before submitting, please describe your LLM in detail in its Hugging Face model card.
+The Open Japanese LLM Leaderboard assesses the language understanding of Japanese LLMs with more than 52 benchmarks covering classical and modern NLP tasks such as natural language inference, question answering, machine translation, code generation, mathematical reasoning, and summarization. For more information about the benchmarks, datasets, and licenses, please consult the "About" page or refer directly to the evaluation tool, llm-jp-eval.
+
+For more details, please refer to the website of [LLM-jp](https://llm-jp.nii.ac.jp/en/).
+
 """
 
 # Which evaluations are you running? how can people reproduce what you have?
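The diff only changes module-level string constants in `src/about.py`. As a minimal sketch of how a leaderboard Space might consume them (the `render_header` function and the standalone layout here are assumptions for illustration, not part of the commit — a real Space would typically pass these strings to UI components such as `gr.HTML` and `gr.Markdown`):

```python
# Hypothetical sketch: TITLE and INTRODUCTION_TEXT come from the commit;
# render_header is an illustrative helper, not from the repository.

TITLE = """<h1 align="center" id="space-title">Open Japanese LLM Leaderboard by LLM-jp</h1>"""

INTRODUCTION_TEXT = """
:jp: The Open Japanese LLM Leaderboard :cherry_blossom: by LLM-jp evaluates
the performance of Japanese Large Language Models (LLMs).
"""


def render_header() -> str:
    # Concatenate the HTML title and the Markdown intro text;
    # a real app would hand each to its own UI component.
    return TITLE + "\n" + INTRODUCTION_TEXT


if __name__ == "__main__":
    print(render_header())
```

Keeping all user-facing copy in one `about.py` module means an edit like this commit touches a single file and never the app logic.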