---
library_name: transformers
tags: []
---

# SOLAR-10.7b-Instruct-dpo

![orca-header](orca-header.png)

This model is a DPO fine-tune of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) preference dataset.
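For context, DPO (Direct Preference Optimization) trains directly on pairs of chosen and rejected responses instead of fitting a separate reward model. The snippet below is a generic sketch of the standard DPO objective, not the exact training code used for this model; `beta` and the variable names are illustrative.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Generic DPO objective over per-sequence log-probabilities.

    Each argument is a tensor of sequence log-probs (summed over tokens);
    beta controls how far the policy may drift from the reference model.
    """
    # Implicit rewards: the policy's log-prob shift relative to the reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```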

## Chat Template

This model follows the ChatML chat template, which wraps each turn in `<|im_start|>role … <|im_end|>` markers.
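A minimal usage sketch with the `transformers` library, assuming the tokenizer ships this ChatML template as the card states; the generation settings and example prompt are illustrative, not recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/SOLAR-10.7b-Instruct-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what DPO fine-tuning does."},
]
# apply_chat_template renders the messages into ChatML and appends the
# assistant header so the model continues as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```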


## Evaluations

```
----Benchmark Complete----
+ 2024-01-24 02:03:32
+ Time taken: 78.1 mins
+ Prompt Format: ChatML
+ Model: macadeliccc/SOLAR-10.7b-Instruct-dpo
+ Score (v2): 72.96
+ Parseable: 166.0
---------------
Batch completed
Time taken: 78.1 mins
---------------
```


|                                         Model                                         |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[SOLAR-10.7b-Instruct-dpo](https://huggingface.co/macadeliccc/SOLAR-10.7b-Instruct-dpo)|  47.57|   74.3|     72.73|   45.76|  60.09|

### AGIEval
|             Task             |Version| Metric |Value|   |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat              |      0|acc     |27.56|±  |  2.81|
|                              |       |acc_norm|26.77|±  |  2.78|
|agieval_logiqa_en             |      0|acc     |41.63|±  |  1.93|
|                              |       |acc_norm|41.32|±  |  1.93|
|agieval_lsat_ar               |      0|acc     |25.22|±  |  2.87|
|                              |       |acc_norm|24.35|±  |  2.84|
|agieval_lsat_lr               |      0|acc     |54.12|±  |  2.21|
|                              |       |acc_norm|54.31|±  |  2.21|
|agieval_lsat_rc               |      0|acc     |68.77|±  |  2.83|
|                              |       |acc_norm|69.14|±  |  2.82|
|agieval_sat_en                |      0|acc     |79.13|±  |  2.84|
|                              |       |acc_norm|79.13|±  |  2.84|
|agieval_sat_en_without_passage|      0|acc     |44.66|±  |  3.47|
|                              |       |acc_norm|44.66|±  |  3.47|
|agieval_sat_math              |      0|acc     |40.45|±  |  3.32|
|                              |       |acc_norm|40.91|±  |  3.32|

Average: 47.57%

### GPT4All
|    Task     |Version| Metric |Value|   |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge|      0|acc     |60.49|±  |  1.43|
|             |       |acc_norm|63.74|±  |  1.40|
|arc_easy     |      0|acc     |82.07|±  |  0.79|
|             |       |acc_norm|79.92|±  |  0.82|
|boolq        |      1|acc     |88.56|±  |  0.56|
|hellaswag    |      0|acc     |68.47|±  |  0.46|
|             |       |acc_norm|86.06|±  |  0.35|
|openbookqa   |      0|acc     |36.20|±  |  2.15|
|             |       |acc_norm|46.60|±  |  2.23|
|piqa         |      0|acc     |79.38|±  |  0.94|
|             |       |acc_norm|79.71|±  |  0.94|
|winogrande   |      0|acc     |75.53|±  |  1.21|

Average: 74.3%

### TruthfulQA
|    Task     |Version|Metric|Value|   |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc|      1|mc1   |57.77|±  |  1.73|
|             |       |mc2   |72.73|±  |  1.49|

Average: 72.73%

### Bigbench
|                      Task                      |Version|       Metric        |Value|   |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement                       |      0|multiple_choice_grade|55.26|±  |  3.62|
|bigbench_date_understanding                     |      0|multiple_choice_grade|62.87|±  |  2.52|
|bigbench_disambiguation_qa                      |      0|multiple_choice_grade|46.51|±  |  3.11|
|bigbench_geometric_shapes                       |      0|multiple_choice_grade|25.63|±  |  2.31|
|                                                |       |exact_str_match      | 0.00|±  |  0.00|
|bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|28.00|±  |  2.01|
|bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|20.57|±  |  1.53|
|bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|46.67|±  |  2.89|
|bigbench_movie_recommendation                   |      0|multiple_choice_grade|41.80|±  |  2.21|
|bigbench_navigate                               |      0|multiple_choice_grade|64.00|±  |  1.52|
|bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|60.00|±  |  1.10|
|bigbench_ruin_names                             |      0|multiple_choice_grade|39.96|±  |  2.32|
|bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|47.90|±  |  1.58|
|bigbench_snarks                                 |      0|multiple_choice_grade|64.09|±  |  3.58|
|bigbench_sports_understanding                   |      0|multiple_choice_grade|71.10|±  |  1.44|
|bigbench_temporal_sequences                     |      0|multiple_choice_grade|59.90|±  |  1.55|
|bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|24.96|±  |  1.22|
|bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|17.89|±  |  0.92|
|bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|46.67|±  |  2.89|

Average: 45.76%

Average score: 60.09%

Elapsed time: 02:10:16