improve readme
README.md
CHANGED
@@ -49,7 +49,7 @@ gemma-2b-orpo performs well for its size on Nous' benchmark suite.
 ## 📊 Dataset
 [`alvarobartt/dpo-mix-7k-simplified`](https://huggingface.co/datasets/alvarobartt/dpo-mix-7k-simplified)
 is a simplified version of [`argilla/dpo-mix-7k`](https://huggingface.co/datasets/argilla/dpo-mix-7k).
-You can find more information [
+You can find more information [in the dataset card](https://huggingface.co/datasets/alvarobartt/dpo-mix-7k-simplified).
 
 ## 🎮 Model in action
 ### Usage notebook
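
For quick context on the dataset the new link points to, here is a minimal sketch (not part of the commit) that loads it with the 🤗 `datasets` library. The repository id comes from the links above; the `train` split name and the field names mentioned in the comments are assumptions.

```python
# Minimal sketch: inspect the dataset referenced in the README.
from datasets import load_dataset

# Repo id taken from the README link; the "train" split name is an assumption.
ds = load_dataset("alvarobartt/dpo-mix-7k-simplified", split="train")

print(ds)            # number of rows and column names
print(ds[0].keys())  # fields of one preference example (e.g. prompt/chosen/rejected -- assumed)
```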