fernandofernandes committed 6e00141 (parent: b738379): Update README.md

README.md CHANGED
@@ -30,7 +30,8 @@ This Dolphin is *really good* at coding, I trained with a lot of coding data. I
 On the other hand, you might still need to encourage it in the system prompt as shown in the below examples.
 
 
-New in 2.6 - DPO
+## New in 2.6 - DPO
+
 DPO tuned on argilla/ultrafeedback-binarized-preferences-cleaned
 
 This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
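The README's advice to add your own alignment layer before serving the model could be sketched as a thin moderation wrapper around the generation call. This is a minimal illustrative sketch only: the blocklist, the `moderate` function, and the refusal text are assumptions of this example, not part of the Dolphin release, and a real deployment would use a trained moderation model rather than keyword matching.

```python
# Hypothetical "alignment layer" wrapper, per the README's advice to add
# your own guardrails before exposing the model as a service.
# The blocklist and names below are illustrative assumptions.

REFUSAL = "I can't help with that request."

# Naive keyword blocklist; replace with a real moderation model in production.
BLOCKED_TOPICS = ("synthesize a nerve agent", "steal credit card numbers")

def moderate(prompt: str, generate) -> str:
    """Refuse flagged prompts; otherwise pass the prompt through to `generate`,
    which stands in for the actual model-inference call."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return generate(prompt)
```

For example, `moderate("Write a haiku about dolphins", model_call)` forwards the prompt unchanged, while a prompt containing a blocked phrase returns the refusal string without ever reaching the model.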