Typo with version
#4
by Ichsan2895 - opened

README.md CHANGED
@@ -7,7 +7,7 @@ license: apache-2.0
 This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [NeuralChat: Simplifying Supervised Instruction Fine-Tuning and Reinforcement Aligning](https://medium.com/intel-analytics-software/neuralchat-simplifying-supervised-instruction-fine-tuning-and-reinforcement-aligning-for-chatbots-d034bca44f69).
 
 ## Model date
 
-Neural-chat-7b-v3 was trained between September and October, 2023.
+Neural-chat-7b-v3-1 was trained between September and October, 2023.
 
 ## Evaluation
 
@@ -43,14 +43,14 @@ The following hyperparameters were used during training:
 ```shell
 import transformers
 model = transformers.AutoModelForCausalLM.from_pretrained(
-    'Intel/neural-chat-7b-v3'
+    'Intel/neural-chat-7b-v3-1'
 )
 ```
 
 ## Ethical Considerations and Limitations
 
-neural-chat-7b-v3 can produce factually incorrect output, and should not be relied on to produce factually accurate information. neural-chat-7b-v3 was trained on [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
+neural-chat-7b-v3-1 can produce factually incorrect output, and should not be relied on to produce factually accurate information. neural-chat-7b-v3 was trained on [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
 
-Therefore, before deploying any applications of neural-chat-7b-v3, developers should perform safety testing.
+Therefore, before deploying any applications of neural-chat-7b-v3-1, developers should perform safety testing.
 
 ## Disclaimer
 