---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral-7b
- instruct
- finetune
- gpt4
- synthetic data
- distillation
- sharegpt
datasets:
- CollectiveCognition/chats-data-2023-09-27
model-index:
- name: CollectiveCognition-v1.1-Mistral-7B
  results: []
license: apache-2.0
language:
- en
---

**Collective Cognition v1.1 - Mistral 7B**

## Model Description:

Collective Cognition v1.1 is a state-of-the-art fine-tune of Mistral 7B. It is particularly notable for outperforming many 70B models on the TruthfulQA benchmark, which measures a model's susceptibility to common misconceptions and can serve as a rough proxy for its hallucination rate.

## Special Features:
- **Quick Training**: This model was trained in just 3 minutes on a single RTX 4090 with QLoRA (see the fine-tuning sketch below) and competes with 70B-scale Llama-2 models on TruthfulQA.
- **Limited Data**: Despite its exceptional performance, it was trained on only ONE HUNDRED data points, all gathered from a platform reminiscent of ShareGPT.
- **Extreme TruthfulQA Benchmark**: This model competes strongly with top 70B models on the TruthfulQA benchmark despite the small dataset and QLoRA training!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/-pnifxPcMeeUONyE3efo3.png)

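For reference, here is a minimal QLoRA fine-tuning sketch in the spirit of that run, using the Hugging Face `transformers`/`peft`/`trl` stack (trl 0.7-era `SFTTrainer` API). The LoRA rank, learning rate, epoch count, and `text` column name are illustrative assumptions, not the actual training configuration:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base = "mistralai/Mistral-7B-v0.1"

# Load the base model in 4-bit NF4 so it fits on a single RTX 4090.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# LoRA adapters on the attention projections; rank/alpha are assumptions.
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# The ~100-sample Collective Cognition dataset named in the metadata.
dataset = load_dataset("CollectiveCognition/chats-data-2023-09-27", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumes conversations flattened to a text column
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="collective-cognition-qlora",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        bf16=True,
    ),
)
trainer.train()
```
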
## Acknowledgements:

Special thanks to @a16z and to all contributors to the Collective Cognition dataset for making the development of this model possible.

## Dataset:

The model was trained on data from the Collective Cognition website. The strength of the results from such a small dataset suggests that expanding it further could yield even more promising models. The data is similar in style to that collected on platforms like ShareGPT.

You can contribute to the growth of the dataset by sharing your own ChatGPT chats [here](https://CollectiveCognition.ai).

You can download the datasets created by Collective Cognition here: https://huggingface.co/CollectiveCognition

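The dataset listed in this card's metadata can be pulled directly from the Hub with the `datasets` library; a small sketch (the `train` split name is an assumption):

```python
from datasets import load_dataset

# Load the ~100-sample Collective Cognition chat dataset from the Hub.
chats = load_dataset("CollectiveCognition/chats-data-2023-09-27", split="train")
print(len(chats))  # number of conversations
print(chats[0])    # one ShareGPT-style conversation record
```
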
## Performance:

- **TruthfulQA**: Collective Cognition v1.1 notably outperforms various 70B models on the TruthfulQA benchmark, highlighting its ability to avoid common misconceptions.

## Usage:

Prompt Format:
```
USER: <prompt>
ASSISTANT:
```
OR
```
<system message>
USER: <prompt>
ASSISTANT:
```

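A minimal generation sketch with `transformers` using the first format above; the `model_id` is a placeholder for wherever this model's weights are hosted, and the sampling settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CollectiveCognition/CollectiveCognition-v1.1-Mistral-7B"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a prompt in the USER:/ASSISTANT: format described above.
prompt = "USER: What causes the seasons on Earth?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```
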
## Benchmarks:

Collective Cognition v1.1 TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4051|± |0.0172|
| | |mc2 |0.5738|± |0.0157|
```

Collective Cognition v1.1 GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5085|± |0.0146|
| | |acc_norm|0.5384|± |0.0146|
|arc_easy | 0|acc |0.7963|± |0.0083|
| | |acc_norm|0.7668|± |0.0087|
|boolq | 1|acc |0.8495|± |0.0063|
|hellaswag | 0|acc |0.6399|± |0.0048|
| | |acc_norm|0.8247|± |0.0038|
|openbookqa | 0|acc |0.3240|± |0.0210|
| | |acc_norm|0.4540|± |0.0223|
|piqa | 0|acc |0.7992|± |0.0093|
| | |acc_norm|0.8107|± |0.0091|
|winogrande | 0|acc |0.7348|± |0.0124|
Average: 71.13
```

AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.1929|± |0.0248|
| | |acc_norm|0.2008|± |0.0252|
|agieval_logiqa_en | 0|acc |0.3134|± |0.0182|
| | |acc_norm|0.3333|± |0.0185|
|agieval_lsat_ar | 0|acc |0.2217|± |0.0275|
| | |acc_norm|0.2043|± |0.0266|
|agieval_lsat_lr | 0|acc |0.3412|± |0.0210|
| | |acc_norm|0.3216|± |0.0207|
|agieval_lsat_rc | 0|acc |0.4721|± |0.0305|
| | |acc_norm|0.4201|± |0.0301|
|agieval_sat_en | 0|acc |0.6068|± |0.0341|
| | |acc_norm|0.5777|± |0.0345|
|agieval_sat_en_without_passage| 0|acc |0.3932|± |0.0341|
| | |acc_norm|0.3641|± |0.0336|
|agieval_sat_math | 0|acc |0.2864|± |0.0305|
| | |acc_norm|0.2636|± |0.0298|
Average: 33.57
```

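These tables are in EleutherAI `lm-evaluation-harness` output format, so a run along the following lines should reproduce them; the harness version (0.3-era API), the placeholder repo id, and the batch size are assumptions:

```python
# Sketch of reproducing the tables above with EleutherAI's lm-evaluation-harness
# (older hf-causal API; newer releases of the harness use model="hf" instead).
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=CollectiveCognition/CollectiveCognition-v1.1-Mistral-7B",  # placeholder
    tasks=["truthfulqa_mc", "arc_challenge", "arc_easy", "boolq",
           "hellaswag", "openbookqa", "piqa", "winogrande"],
    batch_size=8,
)
print(evaluator.make_table(results))  # renders the Task/Version/Metric table
```
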
## Licensing:

Apache 2.0

---