Update README.md
Browse files
README.md
CHANGED
@@ -70,14 +70,16 @@ Decomposed Requirements Following Ratio (DRFR) is the metric used to evaluate how LLMs
 This metric calculates the average accuracy across the answers to the decomposed questions for each instruction.
 The following is a summary of model performance on our dataset.
 
-| Model
-| **claude-3-opus-20240229**
-| **gpt-
-| **hpx003**
+| Model | H_DRFR | A_DRFR | Alignment |
+|-------|--------|--------|-----------|
+| **claude-3-opus-20240229** | **0.854** | 0.850 | 87% |
+| **gpt-4-turbo-2024-04-09** | 0.850 | 0.880 | 87% |
+| **gpt-4-0125-preview** | 0.824 | 0.824 | 83% |
+| **gemini-1.5-pro** | 0.773 | 0.811 | 83% |
+| **meta-llama/Meta-Llama-3-70B-Instruct** | 0.747 | 0.863 | 84% |
+| **hpx003** | 0.691 | 0.738 | 83% |
+| **gpt-3.5-turbo-0125** | 0.678 | 0.734 | 82% |
+| **yanolja/EEVE-Korean-Instruct-10.8B-v1.0** | 0.597 | 0.730 | 79% |
 
 - `H_DRFR`: the accuracy of model responses as evaluated by a human expert
 - `A_DRFR`: the accuracy of model responses as automatically evaluated by GPT-4 acting as an LLM-as-a-judge
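The DRFR computation described above can be sketched as follows. This is a minimal illustration, not the repository's evaluation code: it assumes each instruction decomposes into a list of binary pass/fail judgments and that DRFR is the flat average over all decomposed-question answers; the function name `drfr` and the sample data are hypothetical.

```python
def drfr(judgments):
    """Average accuracy across all decomposed-question answers.

    `judgments` is a list of instructions, each a list of booleans:
    True if the decomposed requirement was followed, False otherwise.
    (Assumed data layout for illustration only.)
    """
    answers = [a for instruction in judgments for a in instruction]
    return sum(answers) / len(answers)


# Example: 3 instructions decomposed into 2-3 requirements each.
scores = [
    [True, True, False],
    [True, True],
    [False, True, True],
]
print(drfr(scores))  # 6 of 8 requirements followed -> 0.75
```

`H_DRFR` and `A_DRFR` would then differ only in who supplies the booleans: a human expert or GPT-4 as the judge.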