Add link to `-aligned` model version in README.md
#2
by NickyHavoc - opened
README.md
CHANGED
@@ -8,7 +8,7 @@ pipeline_tag: text-generation
This model card provides an overview of the **Pharia-1-LLM-7B** model family, which encompasses two foundation models developed by Aleph Alpha Research\*. They are publicly available under the [Open Aleph License](https://github.com/Aleph-Alpha/.github/blob/main/oal.pdf), a license explicitly allowing for non-commercial research and educational use.
-Pharia-1-LLM-7B comes in two distinct variants, `Pharia-1-LLM-7B-control` and `Pharia-1-LLM-7B-control-aligned`. Due to being trained on a multilingual corpus, both models are culturally and linguistically optimized for German, French and Spanish. The Pharia-1-LLM-7B models were trained on carefully curated data in compliance with applicable EU and national regulations, including copyright and data privacy laws. With improved token efficiency, the Pharia-1-LLM-7B-control models excel in domain-specific applications, particularly in the automotive and engineering industries. As such, they serve as a valuable complement to the community's selection of weight-available foundation models. `Pharia-1-LLM-7B-control` is engineered to deliver concise, length-controlled responses that match the performance of leading open-source models in the 7B to 8B parameter range. `Pharia-1-LLM-7B-control` can be aligned to user preferences, making it suitable for critical applications without the risk of shutdown behavior. `Pharia-1-LLM-7B-control-aligned` has received additional alignment training to mitigate the risks associated with using the model.
+Pharia-1-LLM-7B comes in two distinct variants, `Pharia-1-LLM-7B-control` and [`Pharia-1-LLM-7B-control-aligned`](https://huggingface.co/Aleph-Alpha/Pharia-1-LLM-7B-control-aligned). Due to being trained on a multilingual corpus, both models are culturally and linguistically optimized for German, French and Spanish. The Pharia-1-LLM-7B models were trained on carefully curated data in compliance with applicable EU and national regulations, including copyright and data privacy laws. With improved token efficiency, the Pharia-1-LLM-7B-control models excel in domain-specific applications, particularly in the automotive and engineering industries. As such, they serve as a valuable complement to the community's selection of weight-available foundation models. `Pharia-1-LLM-7B-control` is engineered to deliver concise, length-controlled responses that match the performance of leading open-source models in the 7B to 8B parameter range. `Pharia-1-LLM-7B-control` can be aligned to user preferences, making it suitable for critical applications without the risk of shutdown behavior. `Pharia-1-LLM-7B-control-aligned` has received additional alignment training to mitigate the risks associated with using the model.
# Model Overview