## Fine-tuning on Intel Gaudi2
This model is a fine-tuned model based on [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1), trained on the [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset and then aligned with the DPO method using [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). For more details about [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1), you can refer to the [model card](https://huggingface.co/Intel/neural-chat-7b-v3-1) and our blog post [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
**Note:** Adjust the LoRA target modules in the DPO stage to trade off TruthfulQA and GSM8K performance.
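
To make the note above concrete, below is a minimal sketch of a LoRA-based DPO run with `peft` and `trl`. It is illustrative only, not the exact recipe used for this model: the `target_modules` list, LoRA rank, and training hyperparameters are assumptions, and the `DPOTrainer` keyword arguments follow older trl releases (newer releases move them into `DPOConfig`).

```python
# Illustrative LoRA + DPO sketch (not the exact recipe used for this model).
# Assumes an older trl DPOTrainer API; newer trl moves these arguments into DPOConfig.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Intel/neural-chat-7b-v3-1"  # SFT checkpoint used as the DPO starting point
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# The target_modules list is the main lever for the TruthfulQA/GSM8K trade-off
# mentioned in the note above; the modules and ranks here are placeholder values.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Map Intel/orca_dpo_pairs (system/question/chosen/rejected) into the
# prompt/chosen/rejected format DPOTrainer expects, using the prompt template below.
pairs = load_dataset("Intel/orca_dpo_pairs", split="train")
pairs = pairs.map(
    lambda x: {
        "prompt": f"### System:\n{x['system']}\n### User:\n{x['question']}\n### Assistant:\n",
        "chosen": x["chosen"],
        "rejected": x["rejected"],
    },
    remove_columns=pairs.column_names,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with peft_config given, trl derives the frozen reference from the base weights
    args=TrainingArguments(
        output_dir="dpo_out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    beta=0.1,
    train_dataset=pairs,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```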
## Model date
Neural-chat-7b-v3-3 was trained in December 2023.
### Training sample code
Here is the sample code to reproduce the model: [Sample Code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3/README.md).
## Prompt Template
```
### System:
{system}
### User:
{usr}
### Assistant:
```
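
For example, assuming this checkpoint is published on the Hugging Face Hub as `Intel/neural-chat-7b-v3-3`, the template can be filled in and used with `transformers` roughly as follows (the generation settings are placeholders, not tuned recommendations):

```python
# Minimal inference sketch using the prompt template above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/neural-chat-7b-v3-3"  # assumed Hub id for this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

system = "You are a helpful assistant that reasons step by step."
user = "A train travels 120 km in 2 hours. What is its average speed?"
prompt = f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated assistant reply, not the echoed prompt.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```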
## Ethical Considerations and Limitations
neural-chat-7b-v3-3 can produce factually incorrect output and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the fine-tuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
Therefore, before deploying any applications of neural-chat-7b-v3-3, developers should perform safety testing.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Organizations developing the model
The NeuralChat team with members from Intel/DCAI/AISE/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen.
## Useful links
* Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
* Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1) (**note:** the leaderboard has removed the DROP task).

| Metric | Value |
|-----------------------|-------|
| Avg. | 69.83 |
| ARC (25-shot) | 66.89 |
| HellaSwag (10-shot) | 85.26 |
| MMLU (5-shot) | 63.07 |
| TruthfulQA (0-shot) | 63.01 |
| Winogrande (5-shot) | 79.64 |
| GSM8K (5-shot) | 61.11 |
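
These scores come from the leaderboard's EleutherAI `lm-evaluation-harness` runs. As a rough sketch, a single task can be re-run locally along the following lines; note that the harness API, task names, and few-shot handling vary between versions, and `Intel/neural-chat-7b-v3-3` is the assumed Hub id:

```python
# Sketch: re-run one leaderboard task locally with lm-evaluation-harness (v0.4-style API).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Intel/neural-chat-7b-v3-3,dtype=bfloat16",
    tasks=["arc_challenge"],  # 25-shot ARC, matching the table above
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```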