Update constants.py
constants.py (+8 -4)
@@ -36,10 +36,14 @@ LEADERBORAD_INTRODUCTION = """# SEED-Bench Leaderboard
 """
 
 SUBMIT_INTRODUCTION = """# Submit Precautions
-1. Attain
-
-
-
+1. Obtain the JSON file from our [github repository](https://github.com/AILab-CVC/SEED-Bench) after evaluation. For example, you can obtain InstructBLIP's JSON file as results/results.json after running
+```shell
+python eval.py --model instruct_blip --anno_path SEED-Bench.json --output-dir results
+```
+2. If you want to revise a model, please ensure that 'Model Name Revision' aligns with the model name in the leaderboard. For example, if you want to modify InstructBLIP's evaluation result, fill in 'InstructBLIP' in 'Model Name Revision'.
+3. Please make sure the link for each submission is correct, so that everyone can reach the model's repository through the model name in the leaderboard.
+4. If you don't want to evaluate all dimensions, the performance of each unevaluated dimension, and its corresponding average performance, will be set to 0.
+5. After clicking 'Submit Eval', you can click 'Refresh' to obtain the latest leaderboard.
 """
 
 TABLE_INTRODUCTION = """In the table below, we summarize each task performance of all the models.
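Precaution 4 above describes a zero-fill rule: the leaderboard average is taken over all dimensions, with any unevaluated dimension counted as 0. Below is a minimal Python sketch of that rule; the `fill_and_average` helper and the placeholder dimension list are illustrative assumptions, not the Space's actual code.

```python
# Hypothetical illustration of precaution 4's zero-fill rule.
# ALL_DIMENSIONS is a placeholder subset, not the real SEED-Bench dimension list.
ALL_DIMENSIONS = ["Scene Understanding", "Instance Identity", "Instance Attributes"]

def fill_and_average(submitted):
    """Treat unevaluated dimensions as 0 and average over ALL dimensions,
    so skipping dimensions drags the average down."""
    filled = {dim: submitted.get(dim, 0.0) for dim in ALL_DIMENSIONS}
    average = sum(filled.values()) / len(filled)
    return filled, average

# A submission covering only one of the three placeholder dimensions:
scores, avg = fill_and_average({"Scene Understanding": 60.0})
print(avg)  # 20.0, because the two missing dimensions count as 0
```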