---
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
license: cc-by-nc-4.0
tags:
- mlx
metrics:
- comet
pipeline_tag: translation
---
# mlx-community/TowerInstruct-v0.1-bfloat16-mlx
This model was converted to MLX format from [`Unbabel/TowerInstruct-v0.1`](https://huggingface.co/Unbabel/TowerInstruct-v0.1).
Refer to the [original model card](https://huggingface.co/Unbabel/TowerInstruct-v0.1) for more details on the model.
## Intended uses & limitations (from the [original model card](https://huggingface.co/Unbabel/TowerInstruct-v0.1))
The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:
- Translation (sentence and paragraph-level)
- Automatic Post Edition
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions
The dataset and all of its data sources are available on the [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) dataset page.
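
If you want to inspect this data mix directly, the dataset can be loaded with the 🤗 `datasets` library. This is a minimal sketch, not part of the original card; the split name and record fields are assumptions, so check the dataset card for the exact schema:

```python
# Minimal sketch: load TowerBlocks to inspect the training data mix.
# The split name ("train") and record layout are assumptions; see the
# TowerBlocks dataset card for the exact schema.
from datasets import load_dataset

towerblocks = load_dataset("Unbabel/TowerBlocks-v0.1", split="train")
print(towerblocks[0])  # inspect one record
```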
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/TowerInstruct-v0.1-bfloat16-mlx")

prompt = "Translate the following text from Portuguese into French.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nFrench:"
response = generate(model, tokenizer, prompt=prompt, verbose=True)
# Un groupe d'investigateurs a lancé un nouveau modèle pour les tâches liées à la traduction.
```
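
TowerInstruct is an instruction-tuned model trained with the ChatML prompt format, so wrapping the request in the tokenizer's chat template (as shown on the original model card) may give better results than a raw prompt. The sketch below assumes the tokenizer returned by `load` exposes the underlying Hugging Face tokenizer's `apply_chat_template`:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/TowerInstruct-v0.1-bfloat16-mlx")

messages = [
    {
        "role": "user",
        "content": (
            "Translate the following text from Portuguese into French.\n"
            "Portuguese: Um grupo de investigadores lançou um novo modelo "
            "para tarefas relacionadas com tradução.\nFrench:"
        ),
    }
]
# Format the conversation with the model's chat template before generating.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```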