Add Module-to-Text description
src/tasks_content.py (+12 -1)
@@ -26,7 +26,18 @@ TASKS_DESCRIPTIONS = {
 **Note.** The leaderboard is sorted by ROUGE-1 metric by default.
 """,
 "bug_localization": "cool description for Bug Localization on Issue task",
-"module_to_text": "cool description for Module-to-Text task",
+"module_to_text": """# Module-to-Text\n
+
+Our Module-to-Text benchmark 🤗 [JetBrains-Research/lca-module-to-text](https://huggingface.co/datasets/JetBrains-Research/lca-module-to-text) includes 206 manually curated text files describing modules from different Python projects.
+
+We use the following metrics for evaluation:
+* [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge)
+* [ChrF](https://huggingface.co/spaces/evaluate-metric/chrf)
+* [BERTScore](https://huggingface.co/spaces/evaluate-metric/bertscore)
+* ChatGPT-Turing-Test
+
+For further details on the dataset and the baselines from the 🏟️ Long Code Arena Team, refer to the `module2text` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
+""",
 "library_usage": "cool description for Library Usage Examples Generation task",
 "project_code_completion": "cool description for Project-level Code Completion task",
 "bug_localization_build_logs": "cool description for Bug Localization on Build Logs task",
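For reference, here is a minimal sketch of how the automatic metrics named in the new description could be computed over the benchmark with the 🤗 `datasets` and `evaluate` libraries. The split name (`test`) and the reference column (`target_text`) are assumptions, so check the dataset card for the actual schema; the ChatGPT-Turing-Test is an LLM-based evaluation and is not covered here.

```python
# Minimal scoring sketch. The "test" split and the "target_text" column are
# assumptions, not the confirmed schema of the dataset.
import evaluate
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-module-to-text", split="test")
references = dataset["target_text"]      # assumed column with gold descriptions
predictions = ["..."] * len(references)  # replace with your model's outputs

rouge = evaluate.load("rouge")  # the leaderboard sorts by ROUGE-1 by default
print(rouge.compute(predictions=predictions, references=references))

chrf = evaluate.load("chrf")  # ChrF takes a list of reference lists
print(chrf.compute(predictions=predictions, references=[[r] for r in references]))

bertscore = evaluate.load("bertscore")  # requires a target language (or model_type)
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
print(sum(scores["f1"]) / len(scores["f1"]))  # mean BERTScore F1
```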